Thursday, November 20, 2014

SMILI: A simple open-source framework for scientific visualisation

I'm pleased to announce the release of SMILI, a simple framework for scientific visualisation, under a BSD license on SourceForge and GitHub.


In this post, I thought I would cover some of the cool features and command-line utilities available for researchers and developers. The overall documentation can be found here. This post will hopefully be complementary to the documentation and the SMILI videos already available on YouTube.

One of SMILI's handy features is that the command-line tools share the same processing and display capabilities as the GUI applications. For example, you have the following command-line tools to assist you in your research:

  1. milxImageViewer and milxModelViewer - These are fast, no-nonsense viewers for n-D images and 3D polygonal data, such as triangulated surfaces. They don't load pesky plugins; they just let you view your data fast. The right-click options are still available, especially the processing elements. However, any interaction between windows is no longer possible, for obvious reasons; you can use sMILX for this.
  2. milxOverlay and milxAnimate - These applications allow you to take screenshots/movies of models and images together with pre-defined views (which can be created in sMILX). This is great for visually inspecting your results, or for websites when there are a lot of them. There is a batch script in the 'scripts/' directory to run these applications over multiple threads for quick generation.
  3. milxImageApp and milxModelApp - These are the 'Swiss-army knife' applications for images and models respectively. With them you can process all the images/models you provide via the command-line in the same way, with many of the algorithms available in sMILX. For example, to threshold labelled images (in NIfTI format) within a certain range, storing the results in the 'auto' directory, we can simply run:
    milxImageApp --threshold --above 225 --below 180 *.nii.gz -p auto/auto_
  4. milxLabelVisualisation - Sometimes it is necessary to visualise labelled images using iso-surfacing or marching cubes or volume rendering with certain colour maps. This application provides a way of doing these things with off-screen rendering.
  5. milxAssistant - This application provides a simple web browser interface to explore the SMILI documentation in a similar fashion to Qt Assistant.

As updates occur, I will post notices on Twitter and Google+. Major development updates will be posted here on this blog. Currently, a journal publication for SMILI is under review, entitled:
"Focused Shape Visualisation via the Simple Medical Imaging Library Interface"
IEEE Transactions on Visualization and Computer Graphics, 2014, submitted
Upon acceptance, more details, plugins and revision history will be released. If you have any feedback, feel free to post a ticket, email the mailing list or message me on SourceForge.

Cheers Shakes - L3mming

Monday, October 6, 2014

Robust Digital Image Reconstruction Example

In this post, we discuss how to employ the digital image reconstruction technique of Chandra et al. (2014):

Robust digital image reconstruction via the discrete Fourier slice theorem
S Chandra, N Normand, A Kingston, JP Guedon, I Svalbe
IEEE Sig. Proc. Lett. (2014)

using the FTL (implemented in C, available via LGPL license).

This method takes a sufficient set of discrete (rational angle) projections, assuming the Dirac pixel model, i.e. digital image sampling where a line is said to have sampled a pixel iff it passes through the centre of that pixel, and reconstructs the image in O(n log n), where n = N^2. Sufficiency means the projections meet the Katz criterion: essentially, every bin is sampled at least once and no ambiguous solution (i.e. a ghost) can fit within the image. See also:
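The Katz criterion can be checked directly for a candidate angle set. Below is a hypothetical helper (not part of FTL; the function name and calling convention are my own), using the common statement of the criterion: reconstruction of an N x N image is only guaranteed unique when sum(|p_i|) >= N or sum(|q_i|) >= N over the rational angles (p_i, q_i).

```shell
# Hypothetical helper (not part of FTL): test the Katz criterion for an
# N x N image given rational angles as "p q" pairs. Reconstruction is
# only guaranteed unique when sum(|p_i|) >= N or sum(|q_i|) >= N.
katz_ok() {
  local N=$1; shift
  local sum_p=0 sum_q=0 p q
  while [ "$#" -ge 2 ]; do
    p=${1#-}; q=${2#-}                  # strip sign for absolute value
    sum_p=$((sum_p + p)); sum_q=$((sum_q + q))
    shift 2
  done
  [ "$sum_p" -ge "$N" ] || [ "$sum_q" -ge "$N" ]
}

# Angles (0,1), (1,1), (1,-1), (2,1) for a 4x4 image: sum|p| = 4 >= 4
katz_ok 4 0 1 1 1 1 -1 2 1 && echo "sufficient" || echo "insufficient"
```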

Fast Mojette transform for discrete tomography
SS Chandra, N Normand, A Kingston, J Guédon, I Svalbe
arXiv preprint arXiv:1006.1965

Once you have FTL built, you should have four binaries for this method:

  • fmt_angles - Select the angle type and generate the rational angles for given n and N, the image and reconstruction (FFT space) sizes respectively.
  • mt - Compute the discrete (rational angle) projections of the image, also known as the Mojette transform (MT).
  • mt2frt - Convert the projections to those of the FRT/DRT, which are the inverse FFTs of the slices of the 2D FFT.
  • ifrt - Reconstruct the resulting FRT projections in O(n log n) (this n has no relation to the n before).

To illustrate, here is a tutorial covering the whole process.

1. First, we crop the Lena image to a 128x128 image from the centre:

./crop lena512.pgm 128 128 0 lena128.pgm


2. Next, we create the angle set; we choose the L1 minimal set since it has a nice symmetry:
./fmt_angles 128 256 1 mt_angles_128_in_256.txt
3. Next we compute the MT:
./mt lena128.pgm mt_angles_128_in_256.txt mt_lena128.pgm
    Note that if you already have projections, such as those of a sinogram, then see this Google Groups discussion. You can find the publication by my colleague Andrew Kingston on how to do this here.
4. Convert the MT projections into FRT space:
 ./mt2frt mt_lena128.pgm mt_angles_128_in_256.txt 128 256 1.0 frt_lena128.pgm
5. Invert the FRT projections in O(n log n) using the discrete Fourier slice theorem:
./ifrt frt_lena128.pgm recon_lena128.pgm

This gives our nxn result reconstructed and padded into the NxN space.
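For convenience, the five steps above can be collected into a single script. This just wraps the commands from the tutorial; it assumes the FTL binaries and lena512.pgm sit in the current directory.

```shell
# Write the whole reconstruction pipeline from the tutorial above into
# one script. Assumes the FTL binaries and lena512.pgm are in the
# current directory.
cat > reconstruct_lena.sh <<'EOF'
#!/bin/sh
set -e  # stop at the first failing step
./crop lena512.pgm 128 128 0 lena128.pgm
./fmt_angles 128 256 1 mt_angles_128_in_256.txt
./mt lena128.pgm mt_angles_128_in_256.txt mt_lena128.pgm
./mt2frt mt_lena128.pgm mt_angles_128_in_256.txt 128 256 1.0 frt_lena128.pgm
./ifrt frt_lena128.pgm recon_lena128.pgm
EOF
chmod +x reconstruct_lena.sh
```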

HTH
Cheers Shakes - L3mming

Saturday, August 9, 2014

Australian Dell XPS 13 Touch (9333) 2014 with Ubuntu/Linux Experiences

After many hours of trying to get my old laptop's battery life above 3 hrs, I finally bit the bullet and bought a new laptop.

I eventually went for the Dell XPS 13 Touch (i7, 256 GB SSD, 8 GB RAM) and I'd thought I should put down a few words about my experiences for anyone looking for a good portable developer laptop.

My findings in summary: a damn fine developer laptop! Great backlit keyboard, Ubuntu works (details below), fantastic battery life, quiet and portable to the max.

Basically, it weighs about as much as a 10" tablet and is about the same size, despite having a 13" Full HD touch screen. The first day I took it to work, on battery I compiled ITK within 10 minutes (as well as some custom libraries), installed a ton of stuff from the repositories, browsed the web, did some development and remote login work, and had still consumed only two thirds of the battery by the end of the day. My colleagues also loved the styling and were impressed overall.

Why not a MacBook Pro?

I have to say the Mac was tempting. It was a little cheaper (lower spec though), probably slightly more robust, had slightly more battery life, and offers a Unix-style OS with native MS Office. In the end, after having used a Mac for a few weeks to deploy some software, it became a matter of personal belief about what software should be: FLOSS.

For example: customising it and getting 'non-standard' elements working seems to require buying things. Want to write to NTFS filesystems? Buy software. Want to customise the dock? Buy an app. Even the open-source and free SciTE is available for purchase on the App Store. Sigh. This, together with no middle mouse click, no uninstall for installers and deployment issues across 10.6 and 10.9, pushed me over the edge, and I don't regret it one bit (so far... hehehe).

Installing Ubuntu

Installing Ubuntu was almost seamless. As per usual, follow all the backup routines etc. before installing. A few minor hurdles were encountered:
  • Kernel 3.14 is required to get the best battery life for i7s, so I used the distrowatch.com search page to work out which distros had the latest Linux kernel. These were OpenSUSE Factory and the Ubuntu daily snapshot.
  • I used the Ubuntu snapshot (06/08/2014) and the ISO-to-USB installer that comes with Linux Mint (which was installed on my desktop) to put it on a USB stick, since the XPS 13 has no DVD drive.
  • Make sure to shrink the Windows partition from within Windows. During the partitioning stage, the installer kept trying to install Ubuntu to the USB stick, since it was a large (16 GB) stick and the only disk with free space, making it difficult to resize partitions in the installer itself. With all the EFI stuff, it's better to let the installer set it up for you. The OpenSUSE installer did the same thing too.
  • After the installer finished and I rebooted, everything worked! Even the touch screen. I was shocked lol. Not usually my experience in the last 10 years with Linux of any flavour. Linux has come a long way in the last few years! Even Ubuntu looked fabulous and it was good to hear the drum sound again after being away from it for so long. ;D
  • When I say everything, I exaggerated a little: the Wi-Fi didn't work, because of proprietary firmware that just needs the linux-firmware-nonfree package installed.
  • The touch device still has a bug and is not recognised properly, though it works fine. I got middle mouse click working with three finger tap (disabled on Ubuntu by default.... wtf).
Hope this helps someone.

Cheers Shakes - L3mming

Sunday, July 27, 2014

Linux Distros for Laptops and Power Saving Tweaks for a 2nd Gen i7 Machine

In this post I will note down my experiences in finding a suitable Linux distro for my HP Pavilion dv6 and some power-saving or improving battery life tweaks I now use.

This laptop is really a portable desktop, that is a burn-top rather than a laptop. It is heavy (~2.5 kg), power hungry but very powerful, when it is not overheating and burning your lap.

The first distro I tried was OpenSUSE 12.3 with KDE. Everything worked out of the box and it was a joy to use when on AC power. However, the battery life wasn't great (~1 hour) and it overheated constantly. I upgraded to 13.1, but that process broke my C++ development environment as some libraries became unreachable. Probably fixable, but I felt it was time to try something else/new.

Battery life and overheating

To improve battery life, I installed TLP and followed this very useful article on howtogeek to tweak this as much as possible.

It is also important to update to the latest video card drivers, AMD/ATI drivers in my case. ATI has settings in Catalyst for power saving and OpenSUSE has improvements for Radeon cards. To get a sense of what is consuming the power, install PowerTop.

To reduce overheating, I underclocked the CPUs from 2.0 GHz to a maximum of 1.8 GHz on AC power and 1.2 GHz on battery, and changed the default governor to 'powersave' when on battery. BTW, this trick also works on Windows by changing the maximum processor state from 100% to 90% etc. (see this post). This improved battery life and reduced stuttering from overheating, especially when streaming video.
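On Linux, the underclocking above can be done through sysfs (a sketch; the exact paths, frequencies and available governors vary by kernel and driver, and writing them requires root). To stay safe, the commands here are only generated into a file for review rather than executed:

```shell
# Sketch: generate the sysfs commands that set the 'powersave' governor
# and cap the clock at 1.2 GHz (1200000 kHz) for the first four CPUs.
# The commands are written to a file for review; run it as root to apply.
GOV=powersave
MAXFREQ=1200000   # kHz
for i in 0 1 2 3; do
  echo "echo $GOV > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_governor"
  echo "echo $MAXFREQ > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_max_freq"
done > set_powersave.sh
cat set_powersave.sh
```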

The result was an increase of battery life from 1hr to 2.5 hrs and almost zero fan noise on OpenSUSE 13.1 KDE. All bets are off when you start compiling code however. ;)

Linux Distros for Laptops

KDE is probably the best desktop manager right now, along with Cinnamon, but both are processor hungry and better suited to desktops, so I started to look into other options.

I had read in various places that the XFCE and LXDE managers would be better for laptops. I had tried them a while ago and they were very lightweight and snappy, but back then they were missing important customisation features and looked too stripped down and ugly.

I tried XFCE via Xubuntu 14.04 and it was great and looked stylish. It installed flawlessly, boots fast and just works (WiFi, boot splash, suspend etc.). The search feature in the menu, like in KDE and Cinnamon, is also great, but had issues picking out all the installed apps. My touch pad lost its ability to track three fingers and that, combined with a warning that the BIOS detected 'overheating', pushed me to look elsewhere. I would recommend it for older machines or those lacking CPU power.

The next one I tried was elementary OS, an Ubuntu-based distro which I saw running on a colleague's machine. It looked stunning, snappy and workable. After installation, I got the same BIOS error (not surprising, since the stable version at the time was based on Ubuntu 12.04) and the panel was very difficult to customise. It looked so good I wanted to stay, but even after an hour of searching the web I couldn't get a working CPU monitor or weather indicator. A distro I would set up for the Mrs or family, as it is clean and elegant.

In the end, I re-installed the XFCE version of OpenSUSE 13.1 and haven't looked back (so far). The Ubuntu version of XFCE looks better, but the BIOS error disappeared and the track pad worked again. Combined with Cairo-Dock, the final result is reasonable looking and battery life is slightly better than with the KDE version. I simply switch between KDE and XFCE depending on whether I'm on battery or not.

OpenSUSE with XFCE and Cairo-Dock
Hope this is useful to someone.

Todo: I need to try the XFCE version of Linux Mint and Lubuntu.

Cheers Shakes - L3mming

Monday, November 25, 2013

Getting Android CMake to work....

I recently wanted to get one of my libraries - FTL - to build and run on my Nexus 7 tablet. Since everything is done via CMake, the obvious question was: could I just get the Android NDK toolchain set up and building without any changes to my library? (For those looking to set it up manually without CMake, see this post.)

First step.... getting Android CMake to work

There are a number of things spread around the net but nothing conclusive that points out the obvious noobie mistakes I came across... so I documented my findings in case someone else has the same issues:

1. After setting up the NDK r9 x64 version as:

export NDK=~/Dev/android-ndk-r9
$NDK/build/tools/make-standalone-toolchain.sh --platform=android-5 --install-dir=$HOME/Dev/Install/android-toolchain --toolchain=arm-linux-androideabi-4.8
 
It is very important to put the whole path in for --install-dir and not './'. Otherwise it will complain about the NDK install not being found. The toolchain should match your device. [From the Android CMake docs]

2. Next you need Android CMake. The project on Google Code is no longer supported, but the OpenCV version appears to be the latest version, so grab it from here. You just need the .cmake file, so I grabbed the Google Code version and replaced the android.toolchain.cmake file with the latest version. I placed this at ~/Dev/Install/android-cmake.

3. Test it on the Android CMake samples available on the Google Code repository. Setup the environment:

export ANDROID_NDK=~/Dev/android-ndk-r9
export ANDROID_NDK_STANDALONE_TOOLCHAIN=~/Dev/Install/android-toolchain

export ANDROID_NDK_TOOLCHAINS_PATH=$ANDROID_NDK
export ANDROID_TOOLCHAIN_NAME=arm-linux-androideabi-4.8

export PATH=$ANDROID_NDK_STANDALONE_TOOLCHAIN/bin:$PATH
export ANDROID_CMAKE=~/Dev/Install/android-cmake
export ANDTOOLCHAIN=$ANDROID_CMAKE/toolchain/android.toolchain.cmake

alias android-cmake='ccmake -DCMAKE_TOOLCHAIN_FILE=$ANDTOOLCHAIN -DANDROID_NDK=$ANDROID_NDK -DCMAKE_C_COMPILER=arm-linux-androideabi-gcc -DCMAKE_CXX_COMPILER=arm-linux-androideabi-g++ -DANDROID_TOOLCHAIN_NAME=arm-linux-androideabi-4.8 -DANDROID_NDK_TOOLCHAINS_PATH=~/Dev/android-ndk-r9'

The last 'alias' command should be on one line, or broken up with backslashes. The tricky bit is the distinction between ANDROID_NDK and ANDROID_NDK_STANDALONE_TOOLCHAIN. The latter is good for directly accessing the compilers etc., while the former is needed by Android CMake to find the NDK. If not set correctly, you get the "Could not find any working toolchain in the NDK. Probably your Android NDK is broken" message.

If all goes well then doing the following will work:

mkdir build
cd build
android-cmake ..
make -j 4

On the 'android-cmake' line, I used ccmake, the Curses front-end for CMake; replace it with your preferred front-end. The rest of the documentation for the Hello-CMake example should work fine. More on Android CMake porting of libraries later...

Cheers Shakes - L3mming

Sunday, August 15, 2010

Slices, Stacks and ITK

I recently had to program some registration and decided to use ITK. However, there are a number of things I could not find well documented, so I thought I'd put stuff I had to figure out in this post, in the hope it will be useful to others.

Firstly, to load a TIFF series or image stack, which is well documented in the ITK Software Guide (Section 7.11), the code looks like this:
 
typedef unsigned short PixelType;

typedef itk::Image< PixelType, 3 > ImageStackType;
typedef itk::ImageSeriesReader< ImageStackType > ReaderType;
typedef itk::NumericSeriesFileNames NameGeneratorType;


///Generate Numerical File Names
NameGeneratorType::Pointer nameGenerator = NameGeneratorType::New();
    nameGenerator->SetSeriesFormat( argv[1] );
    nameGenerator->SetStartIndex( first );
    nameGenerator->SetEndIndex( last );
    nameGenerator->SetIncrementIndex( 1 );

///Read Stack as TIFFs
ReaderType::Pointer stackReader = ReaderType::New();
    stackReader->SetImageIO( itk::TIFFImageIO::New() );
    stackReader->SetFileNames( nameGenerator->GetFileNames() );
    stackReader->Update();


Here argv[1] will be a sprintf-style format string like 'file%03d.tif' for reading files named file001.tif, file002.tif etc. The typedefs may appear to be overused, but they result in much leaner and more reusable code.
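If the %03d padding is unfamiliar, the expansion the name generator performs can be previewed on the shell, since printf uses the same format syntax:

```shell
# Preview the file names that a '%03d' series format expands to; printf
# repeats the format once per argument.
printf 'file%03d.tif\n' 1 2 10
```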

To view the stack, use the ImageToVTKFilter (found by following these instructions) like:

typedef itk::ImageToVTKImageFilter< ImageStackType >  StackConnectorType;

///Export to VTK
StackConnectorType::Pointer stackConnector = StackConnectorType::New();
    stackConnector->SetInput( imageStackFiltered );
    stackConnector->Update();
    stackConnector->GetOutput()->GetExtent(bounds);
    cout << "Size: " << bounds[1] << "x" << bounds[3] << endl;

///Display
DGVImageVTK *imageStackView = new DGVImageVTK;
    imageStackView->alignImages(false); //Has no effect?
    imageStackView->setName("Image Stack");
    imageStackView->SetInput(stackConnector->GetOutput());
    imageStackView->generateImage();
    imageStackView->show();


The result of the filter is passed to my DGV Image Class which wraps VTK. You can find DGV here. You should be able to pass the connector to vtkImageViewer2 class also.

There are two main posts/email-list entries that are useful for slice traversal and processing. The first is the method using itk::PasteImageFilter. I found this to work, but it required a large amount of code and was very slow.

The second method uses the JoinSeriesImageFilter, which is a lot better but glossed over some newbie ITK details, which I missed and which resulted in all slices of the output containing the same slice. The final working code looks like:

///Iterators
typedef itk::ImageSliceConstIteratorWithIndex< ImageStackType > SliceConstIteratorType;
///Extractors
typedef itk::ExtractImageFilter< ImageStackType, ImageType > ExtractFilterType;
typedef itk::JoinSeriesImageFilter< ImageType, ImageStackType > JoinSeriesFilterType;

///Traverse through slices and produce new output stack
///Setup Slice Iterators which will iterate through slices in the stack
SliceConstIteratorType inIterator( imageStackFiltered, imageStackFiltered->GetLargestPossibleRegion() );
    inIterator.SetFirstDirection( 0 ); ///x axis
    inIterator.SetSecondDirection( 1 ); ///y axis

///Setup Image Stack that will be Joined together
JoinSeriesFilterType::Pointer joinSeries = JoinSeriesFilterType::New();
    joinSeries->SetOrigin( imageStackFiltered->GetOrigin()[2] );
    joinSeries->SetSpacing( imageStackFiltered->GetSpacing()[2] );

for(inIterator.GoToBegin(); !inIterator.IsAtEnd(); inIterator.NextSlice())
{
    //cout << inIterator.GetIndex() << endl;

    ///Setup region of the slice to extract
    ImageStackType::IndexType sliceIndex = inIterator.GetIndex();
    ExtractFilterType::InputImageRegionType::SizeType sliceSize = inIterator.GetRegion().GetSize();
        sliceSize[2] = 0;
    ExtractFilterType::InputImageRegionType sliceRegion = inIterator.GetRegion();
        sliceRegion.SetSize( sliceSize );
        sliceRegion.SetIndex( sliceIndex );

    ///Pull out slice
    ExtractFilterType::Pointer inExtractor = ExtractFilterType::New(); ///Must be within loop so that smart pointer is unique
        inExtractor->SetInput( imageStackFiltered );
        inExtractor->SetExtractionRegion( sliceRegion );
        inExtractor->Update();

    ///Operate on Slice
    InvertorType::Pointer invertor2 = InvertorType::New(); ///Must be within loop so that smart pointer is unique
        invertor2->SetInput( inExtractor->GetOutput() );
        invertor2->Update();

    ///Save Slice
    joinSeries->PushBackInput( invertor2->GetOutput() );
}

///----------
///Write out multi-page TIFF of the result
joinSeries->Update();
WriterType::Pointer writer = WriterType::New();
    writer->SetFileName( "registered_stack.tif" );
    writer->SetInput( joinSeries->GetOutput() );

try
{
    writer->Update();
}
catch( itk::ExceptionObject & err )
{
    std::cerr << "Write Output Exception caught !" << std::endl;
    std::cerr << err << std::endl;
    return EXIT_FAILURE;
}


The iterator goes through all slices without needing to know anything about dimensions or sizes. You will have to tell the ExtractFilter which slice to extract; this is described using ImageRegions.

The operation applied to the slices is simply an inversion of the greyscales, which is especially useful if the slices are negative images. Note that the pointers are declared within the loop; this is important because the pointers remain valid only within the loop and are automatically deleted, since we are using SmartPointers. The effect is that each ExtractFilter then points correctly to the current slice. The result is written as a multi-page TIFF file, which you can open in ImageJ etc.

The above operation was used as a test and will be replaced by registration code. If you are applying filters slice by slice, look into the SliceBySliceImageFilter, which can be found in the Code/Review branch of ITK atm. To invert greyscales and then rescale intensities on each slice, you get:

typedef itk::InvertIntensityImageFilter< ImageType >  InvertorType;
typedef itk::RescaleIntensityImageFilter< ImageType >  RescaleIntensityType;
typedef itk::SliceBySliceImageFilter< ImageStackType, ImageStackType, InvertorType, RescaleIntensityType > SliceFilterType;

///Filter Stack
InvertorType::Pointer invertor = InvertorType::New(); ///Invert Greyscales
RescaleIntensityType::Pointer rescaler = RescaleIntensityType::New(); ///Normalise image values
    rescaler->SetOutputMinimum( 0 );
    rescaler->SetOutputMaximum( ImageMax );
    rescaler->SetInput( invertor->GetOutput() );

///Apply the filters to each slice of the stack
SliceFilterType::Pointer sliceFilter = SliceFilterType::New();
    sliceFilter->SetInput( 0, imageStack );
    sliceFilter->SetInput( 1, imageStack );
    sliceFilter->SetInputFilter( invertor );
    sliceFilter->SetOutputFilter( rescaler );
    sliceFilter->Update();


That's it for the moment. More on registration and the code release later.

Hope that helps.
Cheers Shakes - L3mming

Saturday, June 19, 2010

Packaging Qt Applications for Ubuntu/Debian

I have recently attempted to get my Discrete Geometry 3D Viewer (DGV) building Debian packages, so that one may install DGV without needing to compile it or worry about dependencies on other Ubuntu distros (like the question I got from a potential user).

I expected there would be substantial discussion on how to do this for Qt applications, but I could only find the Maemo Guide useful. It gets even more difficult if you want to build multiple binaries from a single source. Hence, I have documented my findings on this topic in this blog.

Useful Links that I used:
Complete Ubuntu Packaging Guide
Qt App Maemo Guide
GPG Guide
Pbuilder Howto
Using Local Packages

Firstly, my situation is the following. I have three side-by-side dynamic libraries: dgv-base, dgv-contrib and dgv-vtk. By design, the libraries are divided based on their dependencies: Qt; none; and dgv-base, Qt & VTK respectively. Then there's the actual DGV application, which depends on these libraries.

First, the multiple libraries. Begin by renaming the source directory with the name of the package and the version (with a dash as the separator; this is important). For example:
libdgv-0.15

Assuming that your source is in the state you wish to distribute, create the tarballs so that you have two:
libdgv-0.15.tar.gz
libdgv_0.15.orig.tar.gz
Note the underscore between the library name and the version. This is very important.
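Concretely, the naming convention looks like this (using an empty stand-in directory here; substitute your real source tree):

```shell
# Demonstrate the tarball naming convention: a dash in the source
# directory and plain tarball names, an underscore in the .orig tarball.
# The directory is an empty stand-in for the real libdgv source tree.
mkdir -p libdgv-0.15
tar -czf libdgv-0.15.tar.gz libdgv-0.15/
tar -czf libdgv_0.15.orig.tar.gz libdgv-0.15/
ls -1 libdgv-0.15.tar.gz libdgv_0.15.orig.tar.gz
```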

Change to the directory of the source and execute
dh_make -e your.maintainer@address -c GPL
where "-c GPL" assumes you are using the GPLv3 license and "your.maintainer@address" is your email address. This will ask you a series of questions, where you should select 'library' if you're doing a library or 'single binary' if you're doing a binary. I will assume a library for the aforementioned reasons.

This step will create a directory called "debian" with all the Debian package configuration files. Remove the example files; they are not needed for what we are doing, as far as I know.
rm *.ex *.EX
Please read these pages of the packaging guide for the remaining files. You need to edit the control and rules files as I have for DGV (also see the Maemo guide for a simple single-binary example). Things to watch out for are also given in this link (at the end). Fill in the changelog and copyright files as described by the packaging guide, making sure to use the name of the overall package where applicable and to match the maintainer email to your GPG name. If you do not have a GPG key, you need to create one using the GPG guide to sign your packages.

For multiple binaries, there is one last step. You need to create the ".install" and ".dir" files for each package. The former has the list of files to be installed, in a wildcard format (e.g. "usr/lib/lib*.so"). The latter lists where the files are to be installed (e.g. "usr/lib"). Note that there is no "/" in front, as per normal Linux directories from root. This is because they are relative paths; when the real package is built, it will build from the root. See the debian files I created for DGV here.
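As a hypothetical example (the file names and contents depend entirely on your own packages; see the real DGV debian files linked above), the pair of files for a libdgv-base package might look like:

```shell
# Hypothetical example of the per-package files: the .install file lists
# the files to ship (wildcards allowed), the .dir file lists the target
# directories. Paths are relative, with no leading '/'.
cat > libdgv-base.install <<'EOF'
usr/lib/lib*.so*
EOF
cat > libdgv-base.dir <<'EOF'
usr/lib
EOF
```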

Now to build your package, we will use Personal Builder (pbuilder), which builds your package using only the minimal (initial) Ubuntu base plus your stated dependencies. This is the part that makes constructing packages most useful: if your dependencies are correct, then the user just installs the package and Synaptic or apt-get installs everything needed to get it going. Once a pbuilder environment is set up (which can be made to suit building for different architectures and Ubuntu distros), the package is built with your configuration and placed into
/var/cache/pbuilder/result
The only downside is that all the base packages will be downloaded from the Ubuntu repository, which could take a while. If you can't use the Internet for whatever reason, you can still build the packages using
debuild
when within the source directory. You might also want to do this initially to ensure that it's all working correctly.

First create the pbuilder environment by
sudo pbuilder create --distribution $(lsb_release -cs) \
        --othermirror "deb http://archive.ubuntu.com/ubuntu $(lsb_release -cs) main restricted universe multiverse"
Then build the descriptor file
debuild -S
If you did a "debuild" already, then this step is unnecessary. Once the descriptor file is present, then build using pbuilder
sudo pbuilder build *.dsc
To build for a different distro or architecture, see this. Hopefully all works fine and you get a few packages. There might be a few warnings from Lintian; ensure that the description lines are not too long, or google the Lintian warning. If your warning says "empty-binary-package", then your files are installed in the wrong places. For Qt projects, I install the files of the library into "debian/tmp", then move them out into the relevant package directories using the "dh_movefiles -p$@ -Xcontrib -Xvtk usr/lib" command in the rules (see the wiki). The "-Xitem" options tell the movefiles app to ignore files with the string "item" in their names.

Finally, on a Live CD or clean install, check your packages by installing them. Pbuilder didn't pick up the fact that I had incorrectly named libvtk5.2 as libvtk5 for one of my packages, so I recommend this step.

I then build the DGV application against these packages and it's all done. Hope this helps. I will post more on how to use local packages later, when I have it working.

Cheers Shakes - L3mming