2010-09-01

Altera Quartus II Web Edition 10.0 with Lucid 64-bit, part 2

Today I finally managed to do my first "design" using Quartus II on Linux. It all went fairly well (thanks to various references and trial and error), but when I attempted to program the device, no supported hardware was found.

"Oh well, must be some usb-device permission thingy", so after writing an udev permissions rule, I tried again. Still no hardware detected. How odd.

After several rounds of strace and googling like crazy, the issue turned out to be that usbfs is missing from Lucid's kernel. And from the various messages and bug tracker entries, it's not coming back.

usbfs was mounted on /proc/bus/usb previously, but has been more or less replaced by udev (/dev/bus/usb).

After digging around some more, the best reference for the "solution" was probably this forum posting. The fix I opted to implement in the end was this:
  1. sudo mount --bind /dev/bus /proc/bus
  2. sudo ln -s /sys/kernel/debug/usb/devices /proc/bus/usb/devices
Yes, it's completely evil, and you'll need to redo the kludge after each reboot (but not too early in the boot, since the bind mount hides /proc/bus/pci and /proc/bus/input).
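
To make the redoing less tedious, a small helper along these lines could be run manually before starting Quartus (a sketch, assuming mountpoint(8) is available and debugfs is mounted under /sys/kernel/debug):

#!/bin/sh
# hypothetical helper: applies the kludge only if it isn't in place yet
mountpoint -q /proc/bus || sudo mount --bind /dev/bus /proc/bus
test -e /proc/bus/usb/devices || \
    sudo ln -s /sys/kernel/debug/usb/devices /proc/bus/usb/devices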

After the kludge, this is what jtagconfig says:
czr@igor:~/altera/10.0/quartus/bin$ bash ./jtagconfig
/home/czr/altera/10.0/quartus/adm/qenv.sh: line 109: warning: setlocale: LC_CTYPE: cannot change locale (en_US): No such file or directory
1) USB-Blaster [USB 2-1.2]
020B30DD EP2C15/20
So yes, it's a victory. Now Quartus also finds the USB-Blaster (I'm using an older devkit with a Cyclone II).

As a result, my infinitely simple (but probably quite patentable) "Three ANDs and an OR" circuit with 6 switch inputs and one LED output finally works. So, the next step is obviously some "?" before the final step, "profit".

Anyhow, thanks, Altera, for making a Linux version, and hopefully you'll manage to fix your programs so that they work directly with the udev-built /dev/bus/usb namespace soon (Quartus II 10.1 should be out before the end of this year).

2010-08-23

Altera Quartus II Web Edition 10.0 with Lucid 64-bit

Had some spare time at hand and decided to revisit the idea of doing something with an FPGA devkit that I have (Altera Cyclone II, basic devkit). Last time around, Altera didn't really support Linux in any way, and using Windows to run the huge blob of software (Quartus II v7.0) didn't seem like a good idea.

So, congratulations to Altera on their first Linux version of Quartus. Things are looking up finally.

As is common with other commercial/closed-source software, Quartus only supports a limited set of target distributions (the RPM-based usual suspects), so getting things running on 64-bit Lucid contained some surprises (however, not as many as I feared when I started).

The main problem is in the installer itself. It picks up a version of libXi (the X input client library) that references a function (XESetWireToEventCookie) which can't be resolved on Lucid, so an older version of the library is needed. Also, the installer is a 32-bit binary (which is to be expected).

Here's the exact error that happens when running the installer directly:

./altera_installer_gui: symbol lookup error: /usr/lib32/libXi.so: undefined symbol: XESetWireToEventCookie

So, here's a step-by-step guide to getting the blob to work on 64-bit Lucid (a condensed script version follows the list):

  1. Download the installer from the Altera website (it's a shell script containing a compressed binary installer). Do not execute the script yet.
  2. Download the Karmic version of the 32-bit libXi package. Do not install the deb.
  3. Extract the deb: mkdir deb-extract && dpkg -x libxi6_1.2.1-2ubuntu1_i386.deb deb-extract
  4. Now, switch to the directory where you downloaded the Altera installer script. It needs to be extracted, but not yet run. Execute the following: sh altera_installer.external.sh --confirm
  5. The script will ask whether it's ok to extract stuff, answer yes.
  6. Then, the script will ask whether it's ok to run the installer ("OK to execute: ./altera_installer_gui --gui?"). Answer No.
  7. cd bin
  8. cp ../deb-extract/usr/lib/libXi.so.6 libXi.so
  9. LD_LIBRARY_PATH=$PWD ./altera_installer_gui --gui
  10. Enter ~/altera as the install directory and /tmp to hold the temp files.
  11. The installer will take a long time downloading everything (at least it did for me, and it wasn't for lack of bandwidth on my side). Sometimes it will also abort the download (because "it failed") and ask you to restart the install. The partial downloads are kept in /tmp, so eventually your installation should complete.
  12. Once the installer has downloaded everything, you have the option to start Quartus. You might want to check that it will run.
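
For reference, here are steps 3-9 condensed into a single sketch (assuming the file names above and that both downloads sit in the current directory; the installer prompts can't be scripted away, so answer them as described):

#!/bin/sh
# condensed form of steps 3-9: answer yes to extraction, No to running the GUI
set -e
mkdir -p deb-extract
dpkg -x libxi6_1.2.1-2ubuntu1_i386.deb deb-extract
sh altera_installer.external.sh --confirm
cd bin
cp ../deb-extract/usr/lib/libXi.so.6 libXi.so
LD_LIBRARY_PATH=$PWD ./altera_installer_gui --gui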

The program can now be run like this: ~/altera/10.0/quartus/bin/quartus (you may want to create a symlink or a menu entry, since those are not created on Lucid).
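
A minimal sketch for both (the .desktop path follows the XDG convention; everything here assumes the ~/altera install directory from step 10):

mkdir -p ~/bin ~/.local/share/applications
ln -sf ~/altera/10.0/quartus/bin/quartus ~/bin/quartus
cat > ~/.local/share/applications/quartus.desktop <<EOF
[Desktop Entry]
Type=Application
Name=Quartus II 10.0
Exec=$HOME/altera/10.0/quartus/bin/quartus
EOF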

Interestingly enough, I was expecting to have to do a similar library copy for the installed Quartus, but it doesn't actually have the problem at all; only the installer does. The software runs as a 32-bit process (verified via cat /proc/`pgrep quartus`/maps), but oddly, the installed directories contain a 'linux64' subdirectory with 64-bit versions of "stuff". I haven't had the time to investigate whether it would be possible to get a 64-bit version running as well (it seems that Altera wants extra money for the 64-bit Linux version, while a 64-bit Windows version is still available as the Web Edition. Sheesh).

Now, you have to understand that I know pretty little about FPGAs and even less about Quartus. So, while I can definitely say that the main program starts with the above instructions, I have no idea whether all the pieces actually work (like ModelSim and all that). I'll continue this post once I have some time to actually learn something about the huge beast.

2010-08-20

intel, linux and hardware monitoring

Yes, it's time for another rant (plus I have some spare time at the moment). This time it's about Intel, the open-source poster-boy/girl. While I do applaud Intel for devoting some real resources to Linux support, it's somewhat sad that one important area has been lacking serious attention: hardware monitoring.

You should take all of the content of this post as my personal opinion, especially since it contains quite a number of speculative assertions.

Growing up with computers, I've tried to favor Intel motherboards (for at least the past 10 years), or at least Asus boards with Intel chipsets. The reason is very simple: too many negative experiences with Linux on AMD-equipped motherboards. While I quite liked the processors from AMD, somehow they always managed to ally themselves with chipsets which were (for lack of a more fitting word) crap. VIA, anyone? At least with Intel motherboards you could be fairly certain that Linux would run on them without too many problems.

Times change, markets evolve while quality devolves, and now the end result is that I've lost my love for Intel.

Besides the whole "Intel" GMA 500 debacle (I was a sucker), the quality of integration on desktop motherboards has steadily decreased. Instead of Intel's own ethernet chips, you now get the same RTL/BCM crap that is used on el-cheapo motherboards. Instead of working SATA support, there was a phase when you could find pretty much any random PATA/SATA add-on chip on the boards. Obviously, since SATA has wiped the floor clean of PATA and Intel has got their act together, this isn't as big an issue as it was before. But my unwavering trust in Intel has.. wavered.

The remaining sore point of the last few years has been the total lack of hardware monitoring support on Intel motherboards under Linux. Previously, the sensor/monitor chips were connected to the chipset via plain I2C or SMBus (Intel's "improved" I2C), or sat at an ISA address accessed via an LPC chip (which might contain the ADCs/pulse counters internally).

I have no idea what went on in the minds of Intel's hardware engineers when they decided to throw all the existing infrastructure away and replace it with something they developed themselves (which had to be better, right?).

Intel does claim that I2C (and hence SMBus) has the inherent problem of being unreliable in an electrically noisy environment such as a PC/server motherboard. Since I2C doesn't have built-in checksums or redundancy checks/corrections, it must be bad? Well, instead of making a new version of SMBus that would force the protocol to carry a CRC, they decided to develop completely new signalling. Why? Patent/licensing lock-in, of course.

So, together with ADI (Analog Devices Inc., whose PC hardware monitoring business has since gone to ON Semiconductor), Intel developed SST (Simple Serial Transport), which would "solve" all of the I2C/SMBus problems. As Intel put it: "A bus was required to enable industry-wide compatibility with system management devices, such as temperature sensors and voltage monitors in computing applications". Yes. A bus. To connect industry-wide compatible sensors. I2C? But how can you reap licensing fees from technology that is close to public domain? The specification for SST probably exists, but hidden behind NDAs it won't be too helpful here. In short, it's a much higher-frequency bidirectional serial bus using mixed clock/data signalling similar to Manchester encoding, but with some obvious twists in order to qualify as "original intellectual property".

As if this weren't enough, at the same time Intel was ready to launch their TPM environment, which executes in various forms (nowadays as a separate MCU within the chipset, AFAIK). What better place to put the code that talks to the sensors than a secure sandbox within the chipset, to which the end user (and owner of the equipment) has no real access?

Intel pushed the solution as vPro, QST, AMT and various other marketing acronyms, which changed with different processor and chipset combinations. Since the data is of actual value to corporate users (home users don't need to monitor their systems, it seems), there needed to be a way to access it. And since all access to the protected sandbox must go through a single point of entry, HECI was born: an interface to talk with the sandbox (or the actual sandbox itself; the terminology isn't very clear, since it's a fusion of marketing terms, obscure non-documenting documents and Google lore).

Now, don't get me wrong, Intel, but making it close to impossible to get useful environmental measurement data out of a PC is only doing you a disservice. But I guess you don't care. The feeling is mutual, and from this day on I promise not to recommend your motherboards or chipsets over the competition.

Intel did attempt to push a HECI interface driver upstream at some point. However, it was pretty much rejected the first time around, as there was no code that could use it. After the rejection, Intel did release the QST SDK for Linux as well, but by then it was already too late. I guess what happened was that the person working on the Linux support part either got a more fulfilling job or was transferred to do something else that was Important.

In 2.6.30, the HECI interface driver was added to the staging tree, but it was removed from staging in 2.6.32 at Intel's request. There have been some successes using the QST SDK with older motherboards, but newer boards are probably still beyond the reach of open source.

For amusement, you might check this bug report against lm-sensors. The closing entry in the ticket shows how much goodwill Intel has managed to gather during the past years of Linux support. Sadly, the times, they are a-changin'. If you really want to give it a go, a good starting point might be the ThinkPad wiki page on AMT.

What's left then? ACPI? Relying on quality BIOS code? The day that ACPI is complete and bug-free in shipped products, I will eat my hat (I have several, and I will post a poll so that a suitable hat may be selected).

I guess a solution involving an AT90USB module with all the ADCs and counting logic, internal to the PC chassis, might work. I'm not sure yet how driving the current to the various fans could be handled without too much of a power hassle (I just haven't thought about it much). Then, hook it up via an internal USB connector and talk to the measurement logic with regular userspace access. Any takers? (Or even interest?)

2010-07-25

Canon P-150 scanning timings

The second (power-only) USB cable doesn't seem to be all that effective in boosting the scanner's speed.

Using a custom SANE frontend, here are the timings for scanning 3 duplex A4 pages with B&W and 600DPI settings:

0: TIMING 17555283 usecs
1: TIMING 25179 usecs
2: TIMING 14859625 usecs
3: TIMING 24944 usecs
4: TIMING 15765811 usecs
5: TIMING 8385 usecs

With the second USB cable connected:
0: TIMING 14738761 usecs
1: TIMING 21129 usecs
2: TIMING 13816726 usecs
3: TIMING 17589 usecs
4: TIMING 13548819 usecs
5: TIMING 25447 usecs

The timings vary quite a bit (plus or minus one second); I suspect the backend does some silly synchronization dance with the scanner. Still, the second cable does make the scanner marginally faster (it seems the scanner doesn't pause as often with it connected).
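
(If you want comparable numbers without a custom frontend, a rough total-time measurement with plain scanimage might look like this; the --source and --mode values are assumptions about what the backend accepts, so check scanimage --help first:)

# rough total-time measurement for a duplex batch; option values are assumptions
/usr/bin/time -f "total: %e s" \
    scanimage --batch=page%03d.pnm --source "ADF Duplex" \
              --mode Lineart --resolution 600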

I should also mention that there seems to be a stability issue with the second cable (the scanner jams internally and the backend returns "Error during device I/O" at sane_start()).

Also, while developing the frontend, it became clear that the backend/scanner combination has some issues when problems appear. The backend seems unable to reset the scanner at all, and one must resort to physically closing the scanner case (which powers off the scanner) and opening it again. Luckily, the feeding mechanism is pretty simple to clear of jams (the scanner doesn't autofeed the page if it starts in a jammed position).

2010-07-24

Canon P-150 and Linux

The time is ripe for a blog, methinks, and what better way to start it than with a rant about proprietary drivers from a large multinational, plus a product review (all in one!).

I set out to find a suitable scanner for developing OCR software related to accounting and invoice processing. Since using Windows is a big no-no for me personally (for many reasons), I knew that the project would become "interesting" very quickly. Another complicating factor was that my task fell outside regular hobbyist needs, which limits the amount of pre-existing information available via googling.

The scanner I was looking for would preferably have all of these features (in this order):
  • Linux drivers
  • Support duplex scanning (a lot of paper invoices in Finland are printed on both sides)
  • Support multiple page feeding without operator intervention
  • Take as little space on the desk as possible
  • Not be too expensive
  • Be relatively easy to find in Finland
Now, since I'd really like to support HP for their Linux efforts (hplip), that was the natural starting point for my enquiries. Sadly, there doesn't seem to be anything from HP that I could choose. Also, in the document imaging category, the prices climb quite quickly, and it then becomes quite hard to find reviews and comparisons of devices on the web.

After some time spent googling and reading semiautomated link-hoarding review sites (always fun), I ended up with the Canon P-150. It fits the bill for all of my requirements, and at least some of the existing reviews had positive outcomes.

Now, I've fought many a battle with Canon and their Linux "support" over the years. So, even though they advertise that a "SANE compatible Linux driver" is available, I took that with a grain of salt. But at least they have a driver; how bad can it be?

Using a scale of 0-10, where 0 means the vendor is a complete Microsoft lackey and 10 means... well, I don't really know what. I'd like to write Intel or HP here, but really, even they're closer to 6-7 on this scale. So, let's say 10 means a vendor that supports Linux on every product they sell, or at least provides complete documentation for implementing 100% support on Linux. No vendor in the current mass market fits this bill.

So, back to Canon. Previously I'd have put them somewhere around 3-4 for average Linux support.

Using a spare day (it's still my holiday), I bought the scanner and started playing with it. True to the existing reviews, the scanner is quite compact and does seem like a nice piece of hardware.

Some sore points about the product (none of which were show-stoppers for me):
  • Gloss-finished black plastic parts are a bad idea. While they definitely add to the wow factor, having your fingerprints all over the device does not.
  • The paper guides on the device are all plastic. While this won't be a problem for stationary use, it might become one if you lug the device around or pack it away from the desk; the guides will probably end up breaking eventually.
  • You might need two USB ports to power the device. I've only used one USB port so far and haven't tested whether scanning is any faster with more power over USB.
(2010-07-26 update, after scanning about 500 two-sided paper sheets at 600DPI, B&W): The ADF needs serious hand-holding; it's not possible to leave the scanner working on its own. Especially the first and last pages of a batch need manual "twiggling" in order to be fed into the mechanism. Having more paper in the feeder helps, but this is a serious drawback in the ADF. Another issue is that every now and then the scanner decides to go into an "ADF jammed" state (or just sits there after feeding a bit of the next page) and will not come out of it short of a full power-cycle. So, I wouldn't recommend this scanner for automated pipelines, since an operator is necessary at all times. Which is a shame, as the scanner has otherwise performed quite nicely.

Now, back to Linux.

The drivers available from the Canon support site (where they are relatively easy to find) come as zipped files.

Inside the ZIP (d1024mux.zip) you'll find a deb, an RPM and a source tarball of the SANE backend for both the P-150 and the DR-150.

Some issues with the current drivers (1.00-0.02, which is the first release, and I doubt there will be a subsequent one):
  • The deb and RPM files are 32-bit only. While 64-bit Linux distros support running mixed binaries easily nowadays, having only 32-bit drivers is a real problem for SANE backends: the backend is loaded via dlopen (as an .so file), so it can't be used by a 64-bit program utilizing SANE. This means that you can't use the binary packages if you're running a 64-bit Linux (without doing all kinds of irritating operations first; see the quick check after this list).
  • The Debian packaging control file is done wrong. It lists temporary build paths as the package members, and the end result is dpkg -L spewing out paths that just aren't there (most of the files are actually placed under /opt/Canon/ at install time).
  • The source tarball is also interesting. It's a mixture of proprietary binary-only code, pre-built binary components and source code. How nice.
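
As a quick check of the bitness mismatch mentioned above, file(1) tells the story (the backend path and file name here are assumptions based on where the package installs things):

file /opt/Canon/lib/canondr/*.so*   # should report 32-bit objects
file $(which scanimage)             # 64-bit on a 64-bit host
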
So, assuming you're running a 64-bit host, what to do? I wish I could say "easy", but.. I guess SANE doesn't really have a story for proprietary binary drivers, so the process is easy to muck up the way Canon has.

So, the process goes more or less like this (assuming a Debian-like target; a condensed script version follows the list):
  1. Retrieve the source tarball for sane-backends-1.0.19. This is the version that is mentioned in the somewhat terse README from Canon.
  2. Extract it in a directory parallel to "cndrvsane-p150-1.00-0.2". Yes, it needs to be parallel, since the makefile rules within cndrvsane use relative paths (../../sane-backends-1.0.19). How nice.
  3. Configure and make the sane-backends tree first. Do not install. My build didn't actually even finish, but that doesn't seem to matter: the only things needed from this step are the sane-backends config.h and the dependency files (courtesy of libtool, our "helper").
  4. Switch to the cndrvsane source directory
  5. run configure
  6. fakeroot make -f debian/rules binary
  7. This should result in a proper deb file that can be installed. The file list will still be wrong, as in the original deb, but at least the architecture is mostly correct. Most of the files will install under /opt/Canon.
  8. Make a symlink from /usr/local/lib/canondr to /opt/Canon/lib/canondr. Based on stracing scanimage (with -f), this is the path under which the backends are accessed for some reason, and the symlink is not otherwise made properly (I was too lazy to fix the deb control files, and it shouldn't be my job).
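
The same steps as a single sketch (tarball and package file names are assumptions based on the version strings above):

#!/bin/sh
# condensed form of steps 2-8; a failing sane-backends build is tolerated,
# since only config.h and the dependency files are needed from it
set -e
tar xzf sane-backends-1.0.19.tar.gz
tar xzf cndrvsane-p150-1.00-0.2.tar.gz          # hypothetical tarball name
( cd sane-backends-1.0.19 && ./configure && make ) || true
cd cndrvsane-p150-1.00-0.2
./configure
fakeroot make -f debian/rules binary
sudo dpkg -i ../cndrvsane-p150_*.deb            # hypothetical package name
sudo ln -s /opt/Canon/lib/canondr /usr/local/lib/canondr
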
The proprietary bit is a 32-bit binary called "canondr_backendp150". It's an application written in C++ that links against libpthread. What are the odds of it being deadlock-free? Your guess is as good as mine. Since the backend runs in a separate process from your SANE frontend, it can stay 32-bit (as long as your system can run 32-bit C++ programs).

The client and library shim parts are under the GPL (although the file copyright headers suggest that Canon reserves all rights to them, which to my mind is just plain wrong, especially since the shims don't seem to do much of anything except pass the stuff on to the backend). IANAL.

What's left is writing a proper udev rule for the scanner and then playing with it using scanimage. Other pages cover this well enough, so good luck with that (just remember to switch the scanner out of auto-connect mode).
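
(For completeness, such a rule might look like the sketch below; 04a9 is Canon's USB vendor ID, but the product ID here is a placeholder, so check yours with lsusb first, and adjust the group to whatever your distro uses for scanner access:)

sudo tee /etc/udev/rules.d/60-canon-p150.rules <<'EOF'
# Canon P-150; replace XXXX with the idProduct reported by lsusb
SUBSYSTEM=="usb", ATTR{idVendor}=="04a9", ATTR{idProduct}=="XXXX", MODE="0664", GROUP="scanner"
EOF
sudo udevadm control --reload-rules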

The only feature I've been unable to use so far is the top-panel scan button. There seem to be two principal ways of doing this:
  • scanbuttond, which uses libusb to poll the button states with its own backend code, reverse engineered from USB traffic dumps. The project seems dead, or at least in deep hibernation. Needless to say, it doesn't support the P-150.
  • kscannerbuttons, which uses the --wait-for-button function of existing SANE backends. Since the proprietary P-150 backend doesn't support this option, kscannerbuttons can't be used.
Parting words to Canon:
It would be so much easier for me to recommend your products without your shenanigans around Linux support. Even a public contact point for Linux issues would be nice, so I could report what I find. Heck, I'd even send you patches if you'd just give me a chance.

So, Canon stays in the 3-4 category for now (it could be worse).