Wednesday, November 13, 2013

Apple Breaks Apache Configurations for Gitit (Again)

I'm not quite sure why I put myself through this, but I upgraded my Mac Pro to Mavericks. This broke my local Gitit wiki. The symptom was that Apache was unable to start, although nothing was written to the error logs. To determine what was wrong I used sudo apachectl -t. The installer did preserve my httpd.conf, but wiped out the mod_proxy_html.so library that I had installed in /usr/libexec/apache2. See the entry I wrote back when I fixed this for Mountain Lion.

I installed Xcode 5 and I thought I was set, but there is more breakage. You might need to run xcode-select --install to get headers in /usr/include. The makefile /usr/share/httpd/build/config_vars.mk is still broken in Mavericks, so commands like sudo apxs -ci -I /usr/include/libxml2 mod_xml2enc.c won't work.

To make a long story short, I got the latest (development) version of the mod_proxy_html source, and these commands worked for me:

sudo /usr/share/apr-1/build-1/libtool --silent --mode=compile --tag=CC /usr/bin/cc -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK -I/usr/local/include -I/usr/include/apache2 -I/usr/include/apr-1 -I/usr/include/libxml2 -I. -c -o mod_xml2enc.lo mod_xml2enc.c && sudo touch mod_xml2enc.slo

and

sudo /usr/share/apr-1/build-1/libtool --silent --mode=compile --tag=CC /usr/bin/cc -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK -I/usr/local/include -I/usr/include/apache2 -I/usr/include/apr-1 -I/usr/include/libxml2 -I. -c -o mod_proxy_html.lo mod_proxy_html.c && sudo touch mod_proxy_html.slo

Previously, this gave me .so files in the generated .libs directory, but now I just have .o files and I'm not sure that's what I want.
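If I understand libtool correctly, the commands above only ran its compile mode; there is a separate link mode that should produce the .so under .libs. Something along these lines might work -- an untested sketch, with flags guessed from how apxs normally drives libtool, so treat it as a starting point rather than a recipe:

```shell
sudo /usr/share/apr-1/build-1/libtool --silent --mode=link --tag=CC /usr/bin/cc \
    -o mod_proxy_html.la -rpath /usr/libexec/apache2 -module -avoid-version \
    mod_proxy_html.lo -lxml2
sudo cp .libs/mod_proxy_html.so /usr/libexec/apache2/
```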

Sunday, August 11, 2013

More Crappy Print-on-Demand Books -- for Shame, Addison-Wesley "Professional"

So, a while back I wrote about some print-on-demand editions that didn't live up to my expectations, particularly in the area of print quality -- these Tor print-on-demand editions.

Now, I've come across one that is even worse. A few days ago I ordered a book from Amazon called Imperfect C++ by Matthew Wilson -- it's useful, thought-provoking material. Like the famous UNIX-Haters Handbook, it's written for people with a love-hate relationship with the language -- that is, those who have to use it, and who desperately want to get the best possible outcomes from using it, writing code that is as solid and portable as possible, and working around the language's many weaknesses. (People who haven't used other languages may not even be aware that something better is possible, and may assume that complaints about the language are just sour grapes; I'm not really talking to those people.)

The universe sometimes insists on irony. My first copy of Imperfect C++ arrived very poorly glued; the pages began falling out as soon as I opened the cover and began to read. And I am not hard on books -- I take excellent care of them.

So I got online and arranged to return this copy to Amazon. They cross-shipped me a replacement. The replacement is even worse:

Not only are the pages falling out, because they were not properly glued, but the back of the book had a big crease:

So I guess I'll have to return both.

I'll look into finding an older used copy that wasn't print-on-demand. But then of course the author won't get any money.

Amazon, and Addison-Wesley, this is shameful. This book costs $50, even with an Amazon discount. I will be sending a note to the author. I'm not sure there is much he can do, but readers should not tolerate garbage like this. Amazon, and Addison-Wesley, fix this! As Amazon approaches total market dominance, I'm reminded of the old Saturday Night Live parody of Bell Telephone: "We don't care. We don't have to. We're the Book Company."

Thursday, August 01, 2013

Arduino, Day 1

A friend of mine sent me a RedBoard and asked me to collaborate with him on a development idea. So I'm playing with an Arduino-compatible device for the first time. I've been aware of them, but just never got one, in part because after writing embedded code all day, what I've wanted to do with my time off is not necessarily write more embedded code.

I downloaded the Arduino IDE and checked that out a bit. There are some things about the way it's presented that drive me a little batty. The language is C++, but Arduino calls it the "Arduino Programming Language" -- it even has its own language reference page. Down at the bottom the fine print says "The Arduino language is based on C/C++."

That repels me. First, it seems to give the Arduino team credit for creating something that they really haven't. They deserve plenty of credit -- not least for building a very useful library -- but not for inventing a programming language. Second, it fails to give credit (and blame) for the language to the large number of people who actually designed and implemented C, C++, and the GCC cross-compiler running behind the scenes, with its reduced standard libraries and all. And third, it obfuscates what programmers are learning -- especially the distinction between a language and a library. That might keep things simpler for beginners but this is supposed to be a teaching tool, isn't it? I don't think it's a good idea to obfuscate the difference between the core language (for example, bitwise and arithmetic operators), macros (like min), and functions in the standard Arduino library. For one thing, errors in using each of these will result in profoundly different kinds of diagnostic messages or other failure modes. It also obfuscates something important -- which C++ is this? Because C++ has many variations now. Can I use enum classes or other C++11 features? I don't know, and because of the facade that Arduino is a distinct language, it is harder to find out. They even have the gall to list true and false as constants. If there's one thing C and C++ programmers know, and beginners need to learn quickly, it's that logical truth in C and C++ is messy. I would hate to have to explain to a beginner why testing a masked bit that is not equal to one against true does not give the expected result.

Anyway, all that aside, this is C++ where the IDE does a few hidden things for you when you compile your code. It inserts a standard header, Arduino.h. It links in a standard main(). I guess that's all helpful. And finally, it generates prototypes for your functions. That implies a parsing stage, via a separate tool that is not a C++ compiler.

On my Mac Pro running Mountain Lion, the board was not recognized as a serial device at all, so I had to give up using my Mac, at least until I can resolve that. I switched over to Ubuntu 12.04 on a ThinkPad laptop. The IDE works flawlessly. I tried to follow some directions to see where the code was actually built by engaging a verbose mode for compilation and uploading, but I couldn't get that working. So I ditched the IDE.

This was fairly easy, with the caveat that there are a bunch of outdated tools out there. I went down some dead ends and rabbit holes, but the procedure is really not hard. I installed the arduino-core and arduino-mk packages with sudo apt-get install.

There is now a common Arduino.mk makefile in my /usr/share/arduino directory and I can make project folders with makefiles that refer to it. To make this work I had to add a new export to my .bashrc file, export ARDUINO_DIR=/usr/share/arduino (your mileage may vary depending on how your Linux version works, but that's where I define additional environment variables).

The Makefile in my project directory has the following in it:

BOARD_TAG    = uno
ARDUINO_PORT = /dev/serial/by-id/usb-*
include /usr/share/arduino/Arduino.mk
And nothing else! Everything else is inherited from the common Arduino.mk. I can throw .cpp and .h files in there and make builds them and make upload uploads them.

If you have trouble with the upload, you might take a look at your devices. A little experimentation (listing the contents of /dev before and after unplugging the board) reveals that the RedBoard is showing up on my system as a device under /dev/serial -- in my case, /dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A601EGHT-if00-port0 and /dev/serial/by-path/pci-0000:00:1d.0-usb-0:2:1.0-port0 (your values will no doubt vary). That's why my Makefile reads ARDUINO_PORT = /dev/serial/by-id/usb-* -- so it will catch anything that shows up in there with the usb- prefix. If your device is showing up elsewhere, or you have more than one device, you might need to tweak this to properly identify your board.

When you look at the basic blink demo program in the Arduino IDE, you see this, the contents of an .ino file (I have removed some comments):

int led = 13;

void setup() {                
    // initialize the digital pin as an output.
    pinMode(led, OUTPUT);     
}

// the loop routine runs over and over again forever:
void loop() {
    digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
    delay(1000);               // wait for a second
    digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
    delay(1000);               // wait for a second
}

The Makefile knows how to build an .ino file and inserts the necessary header, implementation of main, and generates any necessary prototypes. But if you want to build this code with make as a .cpp file, it needs to look like this:

#include <Arduino.h>

int led = 13;

void setup() {
    // initialize the digital pin as an output.
    pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
    digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
    delay(1000);               // wait for a second
    digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
    delay(1000);               // wait for a second
}

int main(void)
{
    init();

#if defined(USBCON)
    USBDevice.attach();
#endif

    setup();

    for (;;) {
        loop();
        if (serialEventRun) serialEventRun();
    }

    return 0;
}

And there it is -- C++, make, and no IDE. Relaxen and watchen Das blinkenlights!

Tuesday, July 30, 2013

Lexx is Wretched

I have a fondness for science fiction series that are imaginative but not, as a whole, successful. Farscape, I'm talking about you. Even, occasionally, those that start out promising, but which turn into complete failures -- failure can occasionally be interesting. At least, it serves as an object lesson for how a story line can go so very far wrong. Andromeda, I've got your number. I can deal with very dated CGI -- Babylon 5 is still generally good and often great. So I happened to come across discounted boxed sets of Lexx, the whole series, at my local Target store. They were dirt cheap. "How bad could it be?" I thought. Well, now I know. At least, I know part of the story.

First off, Lexx is not something I can show my kids -- pretty much at all. Season 1 has a surprising amount of very fake gore in it -- brains and guts flying everywhere. That didn't really bother them -- I think they got that the brains were made of gelatin -- but it was getting to me. Watching characters carved up by rotating blades, repeatedly; watching characters getting their brains removed -- that got old. Body horror, body transformation -- pretty standard stuff for B grade science fiction, or anything that partakes of the tropes of such, but not actually kid-friendly. So we didn't continue showing the kids.

Still, I thought it might make more sense to watch them in order, so I watched the second two-hour movie (1:38 without commercials). The second one has full frontal nudity, which startled me a bit. I'm not really opposed to looking at a nubile young woman, per se. There is some imaginative world-building and character creation here, but ultimately it's just incredibly boring. It's like the producers shot the material, not having any idea how long the finished product would be; they shot enough scenes to actually power an hour show (forty-plus minutes without commercials), but also shot a bunch of extended padding sequences, "just in case." And so after a repeated intro that lasts just under four minutes, we get a two-hour show with endless cuts to spinning blades slowly approaching female groins, huge needles slowly approaching male groins, countdown timers counting down, getting stopped, getting started, getting stopped... endless fight scenes, endless scenes of the robot head blathering his love poetry, a ridiculous new character eating fistfuls of brains... et cetera, et cetera, et cetera.

Every time something happens, I'd get my hopes up, thinking that maybe the writing has actually improved, but then it's time to slow down the show again, because we've still got an extra hour and twenty minutes to pad. And it's all distressingly sexist and grotesquely homophobic. Again, I'd be lying if I said that I didn't like to look at Eva Habermann in a miniskirt, but given that the actress is actually young enough to be my daughter, and especially given that she has so little interesting to do, and there's just not much character in her character -- it's -- well, "gratuitous" doesn't even begin to cover it. She's young, but Brian Downey was old enough to know better. And let's just say I'm a little disgusted with the choices the show's producers made. The guest stars in Season 1 are like a who-used-to-be-who of B actors -- Tim Curry, Rutger Hauer, Malcolm McDowell. There's material here for a great cult show -- but these episodes are mostly just tedious. They're actually not good enough to be cult classics.

The season consists of four two-hour movies. After watching the first movie, I didn't quite realize all four season one movies were on one disc, so when I tried to watch some more, I put in the first disc of season two by mistake. I watched the first few episodes of season two -- these are shorter. I didn't notice any actual continuity issues. In other words, nothing significant changes from the pilot movie to the start of season two. There are some imaginative satirical elements. Season 2, episode 3 introduces a planet called "Potatohoe" which is a pretty funny satire of the American "right stuff" tropes. But it's too little, and it amounts to too little, amidst the tedious general adolescent sex romp. Then we lose Eva Habermann, who was 90% of the reason I even watched the show this far. I'm honestly not sure if I can watch any more.

It doesn't help that several of the discs skip a lot. It might have something to do with the scratches that were on the discs when I took them out of the packaging, which come from the fact that the discs are all stuck together on a single spindle in the plastic box. And the discs themselves are all unmarked, identifiable only by an ID number, not any kind of label indicating which part of which season they hold -- so good luck pulling out the one you want.

I'm told the later seasons have some very imaginative story lines. People say good things about the third season. It seems like the universe has a lot of potential. Is it worth continuing, or am I going to be in old Battlestar Galactica's second season territory?

UPDATE: I have continued skimming the show. The scripts seem to get somewhat more interesting around season 2, episode 5, called "Lafftrak." It finally seems to take its darkness seriously enough to do something interesting with it, and not just devolve to pornographic settings. The pacing is still weak, but the shows start to feel as if they have a little bit of forward momentum. Of course, then in the next episode, we're back to Star Whores and torture pr0n...

Wednesday, July 24, 2013

The Situation (Day 135)

So, it's day 135. This is either the last covered week (week 20) of unemployment benefits, or I have three more; I'm not quite sure. Without a new source of income, we will run out of money to cover mortgage payments either at the end of September or the end of October. We have burned through the money I withdrew from my 401K in March when I was laid off. I've been selling some possessions, guitars and music gear, but this is demoralizing, and not sustainable. We don't have much more that is worth selling.

I was fortunate to have a 401K to cash out, and to get the food and unemployment benefits I've gotten -- so far I have been able to pay every bill on time and my credit rating is, so far, completely unscathed. But winter is coming. And another son is coming -- Benjamin Merry Potts, most likely around the middle of October.

Emotionally, the situation is very confusing. On the one hand, I have several very promising job prospects, and I'm getting second phone interviews. But these are primarily for jobs where I'd have to relocate, and a small number of possible jobs that might allow me to work from home. This includes positions in Manhattan and Maine. We're coming to grips with the fact that we will most likely have to leave Saginaw. It's a well-worn path out of Saginaw. We were hoping to stick with the road less traveled, but we can't fight economic reality single-handed. And we don't really have any interest in relocating within Michigan, again. If we're going to have to move, let's move somewhere where we won't have to move again -- someplace where, if I lose one job, there's a good chance I can quickly find another.

So, we are willing to relocate, for the right job in the right place. The right place would be the New England area -- Grace is fed up here, and I am too. Maine, Vermont, New Hampshire, Massachusetts, Connecticut, New York, or eastern Pennsylvania are all appealing. But it would not be a quick and easy process. It would probably involve a long separation from my family. I don't relish that idea, especially if my wife has a new baby. That might be what it takes, though. I'll do it for the right job and the right salary and the right place. In any case, we can't move with either a very pregnant woman or a newborn. Nor would it be quick or easy to sell, or even rent out, a house. A benefit to a permanent job in Manhattan is that it would pay a wage that is scaled for the cost of living there. It might be perfectly doable for me to find as cheap a living arrangement there as I can, work there, and send money home. A Manhattan salary would go a long way towards maintaining a household in Michigan, and helping us figure out how to relocate, and I'd probably be able to fly home fairly frequently.

I would consider a short-term remote contract job where I wasn't an employee, and didn't get benefits, and earned just an hourly wage. Let's say it was a four-hour drive away. I'd consider living away from home during the work week, staying in an extended-stay motel, and driving home on weekends. But it would have to pay well enough to be able to do that commute, pay for that hotel, and be able to send money home -- enough to pay the mortgage and bills. A per diem would help, but the contract work like this I've seen won't cover a per diem. We'd need to maintain two cars instead of one. Grace would need to hire some people for housekeeping and child care help. I wouldn't be there to spend the time I normally spend doing basic household chores and helping to take care of the kids.

Would I consider a contract job like that farther away -- for example, an hourly job in California? That's tougher. I think I could tolerate seeing my wife and kids only on weekends, if I knew that situation would not continue indefinitely. But if I had to fly out, that probably wouldn't be possible. California has very little in the way of public transportation. Would I have to lease a car out there, so I could drive to a job? Take cabs? It might make more sense to buy a used car, once out there. In any case, it would cost. Paying for the flights, the hotel, and the car, with no per diem, it's hard to imagine that I'd be able to fly home even once a month. Would I do a job like that if I could only manage to see my family, say, quarterly? Let's just say that would be a hardship. I would consider an arrangement like this if it paid enough. But the recruiters who are talking to me about these jobs are not offering competitive market rates. It doesn't seem like the numbers could work out -- I can't take a job that won't actually pay all our expenses.

The prospect of employment locally or within an hour commute continues to look very poor. I've applied for a number of much lower-paying IT or programming jobs in the region, and been consistently rejected. These jobs wouldn't pay enough to afford a long commute or maintain any financial security at all. In fact, I think we'd still be eligible for food stamps (SNAP) and my wife and kids would probably still be eligible for Medicaid. Their only saving grace is that they would pay the mortgage. Some of them might provide health insurance, at least for me. But I've seen nothing but a string of form rejections for these positions.

Grace and I don't get much quiet time -- we haven't had an actual date night, or an evening without the kids, since March. The closest we come is getting a sitter to watch the kids for a couple of hours while we run some errands. That's what we did last Sunday. I made a recording and turned it into a podcast. You can listen if you are interested.

Building a Podcast Feed File, for Beginners

I had a question about how to set up a podcast. I wrote this answer and thought while I was at it, I might as well polish up the answer just a bit and post it, in case it would be helpful to anyone else.

I'm starting a podcast and I need help creating an RSS feed. You're the only person I could think of that might know how to create such a thing. Is there any way you could help me?

OK, I am not an expert on podcasts in general, because I've only ever created my own. I set mine up by hand. I'll tell you how I do that, and then you can try it that way if you want. You might prefer to use a web site that does the technical parts for you.

A podcast just consists of audio files that can be downloaded, and the feed file. I write my feed files by hand. I just have a hosting site at DreamHost that gives me FTP access, and I upload audio files to a directory that is under the root of one of my hosted web site directories. For example: http://thepottshouse.org/pottscasts/gpp/

The feed file I use, I write with a text editor. I use BBEdit, which is a fantastic text editor for the Macintosh that I've used for over 20 years, but any text editor will do. For the General Purpose Podcast, this is the feed file: http://thepottshouse.org/pottscasts/gpp/index.xml

The feed file contains information about the podcast feed as a whole, and then a series of entries, one for each episode (in my case, each audio file, although they don't strictly have to be audio files; you can use video files). When I add an audio file, I just add a new entry that describes the new audio file.

This is a slight simplification. I actually use a separate "staging" file for testing before I add entries to the main podcast feed. The staging file contains the last few episodes, and I have a separate subscription in iTunes to the "staging" podcast for testing purposes. When I upload a new episode MP3 file, I test it by adding an entry to the staging index file here: http://thepottshouse.org/pottscasts/gpp/index_staging.xml

So I add an entry to test, and then tell iTunes to update the staging podcast. If it works OK and finds a new episode, downloads it, and it comes out to the right length, and the tags look OK, then I add the same entry to the main index file.

I have a blog for the podcast too. That's a separate thing on Blogger, here: http://generalpurposepodcast.blogspot.com That just provides a jumping-off point to get to the episodes, and something I can post on Facebook or Twitter. For each episode I just make a new blog post and write a description and then include a link to the particular MP3 file. The blog in the sidebar also has links to the feeds and to the iTunes store page for the podcast. I'll get to the iTunes store in a minute.

Oh, writing the entry in the feed file is kind of a pain. You have to specify a date, and it has to be formatted correctly, and it has to have the right GMT offset, which changes with daylight saving time. You have to specify the exact number of bytes in the file and the length in hours, minutes, and seconds. If you get these wrong, the file will not be downloaded correctly -- it will be cut off. The URL needs to be URL-escaped; for example, spaces become %20, etc.

If I upload the file to my hosting site first, so that I can see the file in my web browser, and copy the link, it comes out URL-escaped for me, so that part is easy. I paste that link to the file into the feed file entry for the episode. The entry gets a link to the file, and then there is also a UID (a unique ID for the episode). Personally, I use the same thing for both the UID and the link, but they can be different. The UID is how iTunes (or some other podcast reader) decides, when it reads your feed file, whether it has downloaded that file already, or whether it needs to download it again. So it's important to come up with a scheme for UIDs and then never change them, or anyone who subscribes to your podcast will probably either see errors or get duplicated files. In other words, even if I moved the podcast files to a different server, and the link needed to be changed, I would not change the UIDs of any of the existing entries.
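To make the shape of all this concrete, here is a sketch of a single episode entry. The URL, date, byte count, and duration are made-up values, not from my real feed, and the itunes: tag assumes the usual iTunes namespace is declared on the rss element:

```xml
<item>
  <title>Episode 1: Example</title>
  <!-- pubDate is RFC 822 format, with the correct GMT offset -->
  <pubDate>Sun, 21 Jul 2013 14:00:00 -0400</pubDate>
  <!-- length is the exact size of the MP3 in bytes; note the %20 escaping -->
  <enclosure url="http://example.com/podcast/Episode%201.mp3"
             length="12345678" type="audio/mpeg"/>
  <!-- the UID; per my scheme, it just reuses the link, and never changes -->
  <guid>http://example.com/podcast/Episode%201.mp3</guid>
  <itunes:duration>25:43</itunes:duration>
</item>
```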

Once you have your feed file, you can check it with the feed validator -- and you definitely should do this before giving it out in public or submitting it to the iTunes store. See http://feedvalidator.org I try to remember to check mine every so often just to make sure I don't have an invalid date or something like that. If the feed is not working, this thing might tell you why.

OK, the next thing is iTunes integration. The thing to keep in mind here is that Apple does not host any of your files or your feed. You apply to be in the podcast directory, and then someone approves it, and the system generates a page for you on Apple's site. Once a day or so it reads your feed file and updates that page. The point here is that if someone is having problems with your page on iTunes, it is probably not Apple's fault, it is probably a problem with your feed or your hosted audio files.

If you don't want to do this all manually there are sites that will set up your feed for you automatically, like libsyn.com and podbean.com. I am not sure which one is best and I have not used them.

This is Apple's guide that includes information on how to tag your files in the feed -- you could start out with mine as an example, but this is the de facto standard for writing a podcast feed that will work with iTunes and the iTunes store: http://www.apple.com/itunes/podcasts/specs.html

OK, now you know just about everything I know about it. Oh, there is one more thing to talk about. This part is kind of critical.

So you create an audio file -- I make a WAV file and then encode it into an MP3 file either in Logic or in iTunes. My recent spoken word files are encoded at 128 Kbps; if I'm including music I would use a higher bit rate. Some people compress them much smaller, but I am a stickler about audio quality and 128 Kbps is about as much compression as I can tolerate.

You then have to tag it. This actually changes data fields in your MP3 file. The tagging should be consistent. You can see how my files look in iTunes. If the tagging is not consistent then the files will not sort properly -- they won't group into albums or sort by artist and that is a huge pain. When files get scattered all over your iTunes library, it looks very unprofessional and I tend to delete those podcasts. But note that the tags you add are not quite as relevant as they would be if you were releasing an album of MP3 files, and here's why -- podcasts have additional tags that are added by your "podcatcher" -- iTunes, or some other program that downloads the podcast files.

So you tag your MP3 file, and take note of the length (the exact length in bytes and the length in hours, minutes, and seconds), so that you can make a correct entry in your feed file. The MP3 file is the file you upload, but note that this file is not actually a podcast file yet. It doesn't show up in "Podcasts" under iTunes. It becomes a podcast file when iTunes or some other podcatcher downloads it. iTunes reads the metadata from the feed file (metadata is data about a file that is not in the file itself) and it uses parts of that metadata, like the podcast name, to add hidden tags to the MP3 file. Yes, it changes the file -- the MP3 file on your hard drive that is downloaded will not be exactly the same file you put on the server. This is confusing. But it explains why if you download the MP3 file directly and put it in your iTunes library, rather than letting iTunes download it as a podcast episode, it won't "sort" -- that is, it won't show up as an iTunes podcast under the podcast name.

At least, that has been true in the past. I think recent versions of iTunes have finally made it so there is an "advanced" panel that will let you tell iTunes that a file is a podcast file, but sorting it into the proper podcast this way might still be tricky. So the key thing is that you might want to keep both your properly tagged source files, because those are the ones you would upload to your site if, for example, your site lost all its files, or if you were going to relocate your site to a new web server, and also the files after they have been downloaded and tagged as podcasts by iTunes. I keep them separately. If someone is missing an episode I can send them the podcast tagged file and they can add it to their iTunes library and it will sort correctly with the other podcast files.

OK, now you pretty much know everything I know about podcast feeds. I prefer doing it by hand because I'm a control freak -- I like to know exactly what is happening. I like to tag my files exactly the way I want. But if you're not into that -- if you don't know how to upload and download files of various kinds and tag MP3 files, for example -- you probably want to use something like Libsyn. Or maybe you know what to do but just want to save time. I just know that I've sometimes been called on to help people using these services fix their feeds after they are broken, or they need to relocate files, and it isn't pretty, so I'll stick to my hand-rolled feed.

Monday, July 08, 2013

Building Repast HPC on Mountain Lion

For a possible small consulting project, I've built Repast HPC on Mountain Lion and I'm making notes available here, since the build was not simple.

First, I needed the hdf5 library. I used hdf5-1.8.11 from the .tar.gz. This has to be built using ./configure --prefix=/usr/local/ (or somewhere else if you are doing something different to manage user-built programs). I was then able to run sudo make, sudo make check, sudo make install, and sudo make check-install and that all seemed to work fine (although the tests take quite a while, even on my 8-core Mac Pro).
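Condensed into one place, those hdf5 steps look like this (just the commands described above, assuming the tarball is unpacked and you're in the source directory):

```shell
# hdf5-1.8.11
./configure --prefix=/usr/local/
sudo make
sudo make check          # the tests take quite a while
sudo make install
sudo make check-install
```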

Next, I needed to install netcdf. I went down a versioning rabbit hole for a number of hours with 4.3.0... I was _not_ able to get it to work! Use 4.2.1.1. ./configure --prefix=/usr/local, make, make check, sudo make install.

Next, the netcdf-cxx, the C++ version. I used netcdf-cxx-4.2 -- NOT netcdf-cxx4-4.2 -- with ./configure --prefix=/usr/local/

Similarly, boost 1.54 had all kinds of problems. I had to use boost 1.48. ./bootstrap.sh --prefix=/usr/local and sudo ./b2 ... the build process is extremely time consuming, and I had to manually install both the boost headers and the compiled libraries.

Next, openmpi 1.6.0 -- NOT 1.6.5. ./configure --prefix=/usr/local/ seemed to go OK, although it seems to run recursively on sub-projects, so it takes a long time, and creates hundreds of makefiles. Wow. Then sudo make install... so much stuff. My 8 cores are not really that much help, and don't seem to be busy enough. Maybe an SSD would help keep them stuffed. Where's my 2013 Mac Pro "space heater" edition, with a terabyte SSD? (Maybe when I get some income again...)

Finally, ./configure --prefix=/usr/local/ in repasthps-1.0.1, and make succeeded -- after about 4 hours of messing around with broken builds. I had a lot of build issues with the individual components, and then problems with Repast HPC itself even after everything else built successfully, before I finally found this e-mail message chain, which had some details about the API changes between different versions and laid out a workable set of libraries:

http://repast.10935.n7.nabble.com/Installing-RepastHPC-on-Mac-Can-I-Install-Prerequisite-Libraries-with-MacPort-td8293.html

They suggest that these versions work:

drwxr-xr-x@ 27 markehlen  staff    918 Aug 21 19:14 boost_1_48_0 
drwxr-xr-x@ 54 markehlen  staff   1836 Aug 21 19:19 netcdf-4.2.1.1 
drwxr-xr-x@ 26 markehlen  staff    884 Aug 21 19:20 netcdf-cxx-4.2 
drwxr-xr-x@ 30 markehlen  staff   1020 Aug 21 19:04 openmpi-1.6 
drwxr-xr-x@ 31 markehlen  staff   1054 Aug 21 19:28 repasthpc-1.0.1

And that combination did seem to work for me. I was able to run the samples (after changing some directory permissions) with:

mpirun -np 4 ./zombie_model config.props model.props
mpirun -np 6 ./rumor_model config.props model.props
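To recap, the overall sequence that finally worked for me looked roughly like this. I actually ran these steps interactively, one package at a time, so take this condensed version as a sketch:

```shell
# Version combination from the Nabble thread: hdf5-1.8.11, netcdf-4.2.1.1,
# netcdf-cxx-4.2, boost_1_48_0, openmpi-1.6, repasthpc-1.0.1
for pkg in hdf5-1.8.11 netcdf-4.2.1.1 netcdf-cxx-4.2 openmpi-1.6; do
  (cd "$pkg" && ./configure --prefix=/usr/local/ && make && sudo make install)
done
# boost is the exception: b2 ignores the prefix, so its headers and
# libraries have to be copied into place manually
(cd boost_1_48_0 && ./bootstrap.sh --prefix=/usr/local && sudo ./b2)
# and finally Repast HPC itself:
(cd repasthpc-1.0.1 && ./configure --prefix=/usr/local/ && make && sudo make install)
```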

---

Notes on building Boost 1.54: doing a full build yielded some failures, with those megabyte-long C++ template error messages, so I had to build individual libraries. The build process doesn't seem to honor the prefix and won't install libraries anywhere but a stage directory in the source tree. I had to manually copy files from stage/lib into /usr/local/lib and manually copy the boost headers. There is an issue with building mpi, too:

./bootstrap.sh --prefix=/usr/local/ --with-libraries=mpi --show-libraries

sudo ./b2

only works properly if I first put a user-config.jam file in my home directory containing "using mpi ;". Then I have to manually copy the boost mpi library.
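Spelled out, the boost mpi part of the recipe is just this (the copy destination assumes the /usr/local prefix):

```shell
# b2 only builds boost.mpi if it finds "using mpi ;" in user-config.jam
echo "using mpi ;" > ~/user-config.jam
./bootstrap.sh --prefix=/usr/local/ --with-libraries=mpi
sudo ./b2
# then install the resulting library by hand from the stage directory:
sudo cp stage/lib/libboost_mpi* /usr/local/lib/
```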

Notes on building netcdf-cxx4-4.2: I had to use sudo make and sudo make install, since it seems to write build products into /usr/local/ even before doing make install (!)

Sunday, July 07, 2013

Are You Experienced?

A recruiter recently asked me to answer some questions for a client, so I did. I thought it might be worthwhile to save the questions and my answers and make them public so that I can refer people to them.

How many years of C++ experience do you have & have you worked with C++ during the last 2 years?

I've been using both C and C++ since before the C89/C90 standard and well before the C++98 standard. I taught myself C programming in college -- I did not learn it in a class. I initially used C++ when there were various sets of object-oriented extensions to C like THINK C and Microsoft's "structured exception handling" for Windows NT 3.5.

It's hard to provide an exact "number of years." At some positions I worked more in plain old C, or Java, or NewtonScript, or other languages, but even in those jobs there were often times where I was working with small C and/or C++ projects on the side.

I own copies of the ISO standards for C and C++ (C90, C99, and C++03) and used to study them for my own edification, so that I could write more portable code. I used to subscribe to the C++ Report magazine. I used to write C and C++ interview test questions for screening people at the University of Michigan. I own dozens of books on C++ and have studied them extensively. I was definitely a C++ expert, although I was much more of an expert on C++03 than C++11. I am not so interested in the "cutting edge" of C++ these days (see below for notes about STL and C++11/C++0x). For example, here's a blog post I wrote about the C++ feature "pointers to member functions," in 2006:

http://praisecurseandrecurse.blogspot.com/2006/08/generic-functions-and-pointers-to.html

I have used the following compilers and frameworks for paid work (off the top of my head, these are the major tools I've used, and I am probably forgetting some):

THINK C / Think Class Library

MPW C/C++

Borland C++ with the Object Windows Library and TurboVision for MS-DOS

Microsoft Visual C++ starting with 1.0 / MFC

CodeWarrior / PowerPlant class library and Qt

XCode (aka ProjectBuilder) / CoreAudio

GCC ("g++") / Lectronix AFrame library

TI Code Composer Studio

In addition, I have some experience with static checkers (Lint, Understand for C/C++, QAC, etc. -- more are mentioned on my resume), and I would say they are a must for large commercial code bases. Also, I have worked with profilers, various run-time debuggers, and tools such as valgrind -- these are incredibly useful in finding bugs, especially in the use of uninitialized memory.

So, how do you put that in an exact number? I'd say I've used C++ daily for perhaps 12 years, but even when I was not using C++ as my primary development language for a given job or set of projects, I used it at least a little bit every year for the last 24 years. So somewhere in between those numbers.

In the Last Two Years

Yes, the most recent project was a server for Lectronix, something called the PTT Server, that sits on top of the AFrame framework and receives requests via JSON-RPC, and manages the state of all the discrete IO in the system. It is a multi-threaded application using message queues and hierarchical state machines. The server is not very big, maybe 7,500 lines of code, and the top layer of it is actually generated by some internal Python scripts. During this period, I was also maintaining and adding new features to several other servers and drivers as needed.

If the client wants to know whether I am familiar with C++11/C++0x, the answer is "not very much." I have not studied the C++11 changes very much yet, so I am only slightly familiar with features like enum classes and lambdas. At Lectronix, we chose not to try to adopt new features for an existing multi-million line code base, and we stuck with slightly older, well-tested versions of our compilers. I have definitely used STL, but we do not use it heavily in embedded projects, because of a conservative attitude towards memory use and hidden costs. We also tend to avoid things like multiple inheritance in embedded programming, although I have used these features in the past. We tend to deliberately use a conservative subset of C++.

While I consider myself an expert on C++, it is not the be-all, end-all of programming languages. Learning other languages has made me a much better programmer and able to see problems from different perspectives. For example, I have on several occasions prototyped designs in other languages, for example Dylan or Haskell, to refine a _design_, and then ported the design to C++ to produce the shipping product.

I believe the industry is gradually moving towards functional programming, with languages such as Scala (which runs on the JVM) or Haskell (which can now generate code for ARM on "bare metal"), and towards embeddable scripting languages on top of C/C++ for configurability (for example, Lua on top of a back end written in C++ is the structure of most high-performance commercial video games). Even sticking with plain C++, there is no denying that Clang/LLVM are very promising developments -- Clang has the best error-checking and static analysis I've seen so far for C++, and for Objective-C this static analysis has enabled a feature called ARC -- automatic reference counting, which is basically garbage collection without a separate background task that can create "pauses."

I have a strong interest in figuring out how to use tools such as these to make a business more competitive -- specifically, reducing line counts and bug counts and improving time to market. If the client is not interested in any of that, I'm probably not going to be the best fit for them, since they will not be making maximum use of my skills. I see myself as a full-stack software developer who should be able to choose the best tools for the job, not strictly as a C++ programmer.

What recent steps have you taken to improve your skills as a software developer?

Recently, while still at Lectronix, I was asked to re-implement some logic for handling the "PTT" (push to talk) signals for police radios, microphones, and hand controllers. My supervisor wanted me to use a library for handling HSM (hierarchical state machine) designs. I had never worked with hierarchical state machines, just flat state machines, so this was a little bit of a challenge. My first drafts of the state machines were not very good, but I arranged to meet with some other developers on my team who had more experience with HSM designs, and improved them. After a couple of revisions I got the state machines up and running; they were simpler than the original design, they passed all my testing and all the bench testing, and we shipped them in a prototype for a police motorcycle product. So I now feel that I understand the basics of using hierarchical state machines as a design tool.

While unemployed, and although the job search itself takes up a great deal of time, I have been working on teaching myself Objective-C programming, something I've wanted to learn for a long time, and the basics of Apple's iOS framework for developing iPhone and iPad applications. My goal is to get a simple game up and running and available in the app store as a small demonstration project to show potential employers. Even if the first version is not sophisticated it should prove that I can build on those skills. I am doing this work "in public" -- sharing the code on github and writing about the design experiments and trade-offs on my blog. Here is one of my blog posts about the Objective-C implementation of the game:

http://praisecurseandrecurse.blogspot.com/2013/06/objective-c-day-5.html

The latest code is available on GitHub here: https://github.com/paulrpotts/arctic-slide-ios

I am also attempting to master some of the slightly more advanced features of the Haskell programming language, and functional programming in general, on the grounds that I believe that properly using functional languages such as F#, Scala, and Haskell can provide a competitive advantage, and give me the chance to bring that advantage to an employer.

Describe any experience you have in developing desktop applications.

Just to expand on some items from my resume and some that aren't:

In college I worked with a mathematics faculty member to develop an instructional multimedia tool using HyperCard and XCMD plug-ins that I wrote in THINK C for teaching calculus. I developed various other small applications for "classic" MacOS too, including a tool to edit version resources and a startup extension.

At the University of Michigan's Office of Instructional Technology, I built several instructional multimedia applications for students -- based around HyperCard stacks with custom XCMD and XFCN plug-ins written in C, Toolbook programs that integrated content from videodiscs, and a Visual BASIC application that used digital video to teach manufacturing process re-engineering techniques to business school students.

As a side project, I used Borland C++ and the TurboVision for MS-DOS framework, and also Visual BASIC for MS-DOS, to develop a survey application "front end" (to collect data at remote sites) and "back end" (to read the data from discs and aggregate it and display statistics) for the National Science Teachers Association (NSTA).

At Fry Multimedia, I built a prototype, with one other developer, in C++ using the MFC framework, of a Windows application to search a large CD-ROM database of compressed business data called "Calling All Business." This featured a "word wheel" feature that would match entries in the database and display matches while the user typed in search strings.

At the University of Michigan Medical Center I wrote, among other things, a Newton application that administered surveys to people in the ER waiting rooms. This was not desktop so much as "palmtop," but the same emphasis on user-centered design was there. I also either wrote entirely, or collaborated with another developer on, several internal applications, such as a Macintosh application written in C++ to upload data from the Newton devices, and an application written using Metrowerks CodeWarrior (in C++, using the PowerPlant framework) that used AppleEvents to control Quark XPress in order to print batches of customized newsletters while providing text-to-speech feedback.

At Aardvark Computer Systems I completely rewrote the GUI application for controlling the company's flagship sound card product, the Aardvark Direct Pro Q10. This featured a mixer interface with knobs and sliders and animated meters to display audio input and output levels on all channels, persistent storage of mixer settings, and was built using C++ and the Qt framework. I also ran the beta-test program for this software.

At Lectronix, my work did not focus on desktop applications but I was able to occasionally contribute code to the Infotainment system GUIs, written in C++ using the Qt framework.

Describe any experience you have in developing server-side applications.

The bulk of my work at InterConnect was revisions to a server application written in Java that ran on Sun machines and parsed "page collections" (bundles of scanned page images) along with metadata -- a combination of XML, including Library of Congress subject heading data, and MARC records -- to populate Oracle databases. These were large collections (terabytes, and at the time that was an unusually large amount of data to put into a web application). The data was housed on the client's EMC storage RAID arrays (at the time, very high-end systems). A full run of the program to populate a "page collection" would take several days. I worked with the client's Oracle team to debug issues with their database, particularly stored procedures written in PL/SQL, and with their production team to try to determine the best strategies for data issues (I wrote code to "clean" this data). The client was ProQuest (formerly University Microfilms International and Bell and Howell Information and Learning), and I worked specifically on the back-end for the Gerritsen Women's History collection and the Genealogy and Family History collection. When InterConnect handed over development to ProQuest's internal team, I wrote documentation on the import process and gave a presentation to explain it to their team.

Much of my work at Lectronix was also server-side applications, in the sense that all the code on products like the Rockwell iForce system was divided into drivers, servers, clients, and GUI code. Servers interact with clients and other servers using a network sockets interface wrapped in the Lectronix proprietary framework. So, for example, the Audio Zone Manager (AZM) server receives all requests as remote procedure calls and handles multiple clients. For some complex tasks like "priority audio" text-to-speech prompts it sets up a session: a client requests a session, the AZM lowers the level of "foreground" audio such as FM radio, the requesting client is granted a token, and the client must then send "keepalive" messages using the token in order to keep the priority audio active. Multiple clients can request priority audio using different priority levels, and the AZM must be able to handle requests that are "immediate" (only valid now) or requests which can be deferred, queue up these requests, and manage termination of expired priority audio sessions if a client process fails to send "keepalive" messages.

The more recent PTT server has a similar, multi-threaded design, where multiple instances of hierarchical state machines are fed messages via a serializing message queue, and there were various APIs to drivers that the code called, some that returned immediately, and some which blocked and returned only when a new state was available from the DSP (for example).

These are two examples; depending on what is meant, some other things I've worked on might qualify. For example, applications that support AppleEvents, wait for serial data, or run as "daemons" handling audio transfer between a driver and user-space applications on MacOS X, or run as interrupt-driven tasks to mix audio data.

Describe any experience you have in developing web applications.

I am not familiar with Microsoft web frameworks like ASP.NET so if this employer is looking for someone who would "hit the ground running" to develop web applications using that framework, I'm not that guy. I would be willing to learn, though, and I think I have a track record that indicates that I could.

I am not a database guru -- I have worked with databases and solved problems with databases, and I can understand basic database queries and the basics of database design but I am not an expert on (for example) query optimization for SQL Server; again, that is something I'd have to spend time learning.

I have not recently developed web applications using stacks such as LAMP (Linux, Apache, MySQL, PHP) or Ruby on Rails. However, I did work on several early web applications using Perl CGI scripts and plain old HTML -- for example, at Fry Multimedia I developed the web site for the Association of American Publishers. I was a beta-tester for Java before the 1.0 release and wrote early experimental "applets."

At the University of Michigan I used Apple's WebObjects (Java, with the WebObjects framework) to port my design for the Apple Newton survey engine to the web.

Later, while at InterConnect, I did some work (fixing bugs) on InterConnect's Perl and Java framework for "front-end" web sites -- the engine that generated HTML from database queries and templates, and made fixes to the web applications themselves in Perl, although this work was not my primary focus (see above).

I can solve basic Apache configuration issues and I've done small projects to set up, for example, a personal Wiki written in Haskell on an Ubuntu server. For example, I had to do some problem-solving to get this functioning under Mountain Lion (MacOS X 10.8). I wrote recently about this here:

http://geeklikemetoo.blogspot.com/2012/07/apple-breaks-my-gitit-wiki-under.html

What development project have you enjoyed the most? Why?

I have enjoyed a lot of them, but I'll give you one specific example. Back at the Office of Instructional Technology at the University of Michigan I worked with a faculty member in the School of Nursing to develop a program to teach nursing students about the side effects of antipsychotic medications. For this project we hired an actor to act out various horrible side effects, from drowsiness to shuffling to a seizure, and videotaped him. I enjoyed this project a lot because I got to collaborate with several people, including Dr. Reg Williams. I got to have creative input at all levels, from conception to final development -- for example, while developing the ToolBook application, I added animations to show how neurotransmitters move across synapses. I learned a number of new skills, including how to light a video shoot and how to use non-linear video-editing equipment (Avid Composer), and I got to see the final system in use and receive positive feedback.

So I would say that what I liked most about that project was (1) being involved in the design and implementation at all stages, (2) the chance to work with some very collegial and welcoming people, (3) being able to learn several new skills, and (4) being able to "close the loop" and see how the final product was received by the customers (in this case, faculty and students). I have since worked in many development situations where different parts of that process were lacking or non-existent, and that often makes projects less enjoyable.

Here's a "runner-up." In 2000 I did some consulting work for Aardvark Computer Systems, assisting them with getting their flagship audio card working on Macintosh systems. Where the multimedia application I worked on several years earlier was high-level, this was very low-level: the issues involved included data representation ("big-endian" versus "little-endian"), and the low-level behavior of the PCI bus (allowed "shot size" and retry logic). Debugging this involved working closely with an electrical engineer who set up wires on the board and connected them to logic analyzer probes, and configuring GPIO pins so that we could toggle them to signal where in the DSP code we were executing. This was tedious and fragile -- even while wearing a wrist strap, moving around the office near the desk could produce enough static electricity to cause the computer to reboot. The solution eventually required a very careful re-implementation of the PCI data transfer code using a combination of C and inline Motorola 56301 DSP assembly language. I had to pull out a lot of very low-level tricks here, paying close attention to the run-time of individual assembly instructions from the data sheet, borrowing bits from registers to count partially completed buffers, studying chip errata and documentation errors in the data sheet, and dealing with a compiler that did not properly support ISO standard C and would let you know this by, for example, crashing when it saw a variable declared const. This also was very enjoyable for the chance to work very close to the hardware, learning new things, solving a difficult problem, working in close collaboration with a talented engineer, and getting the chance to actually ship the solution we came up with. In retrospect we forget the tedium and bleary eyes and remember the success.

The Situation (Day 118)

So. Day 118 of unemployment. Almost four months. It's getting hard to stay positive and keep anxiety at bay. Here's what's going on.

It might sound hopelessly naive, but I didn't think it would be this hard to find another job. I know I've been quite fortunate in some ways with respect to my career -- being very into, and good at, computer programming through the '90s and 2000s was a good place to be. I've been out of work, briefly, a few times before, when the small businesses or startups I worked for shrunk or imploded, but I've never had much difficulty finding my next job, and the job changes have generally been "upgrades" to higher pay, or at least bigger projects and more responsibility.

The job market is certainly bad right now, and especially bad locally. I am trying to both be realistic and optimistic at the same time -- realistically, it seems to be absolutely useless, for the most part, to apply for publicly-posted jobs. I've applied for dozens -- it would have been more, if there were more posted, but while there are a lot of job listings, it doesn't make any sense for me to apply for jobs that will not pay enough to cover our mortgage; if I got one, we'd still have to move. And we are still trying to figure out how to avoid that, so that we don't lose everything we've put into our house.

Working with recruiters has been an overwhelmingly negative experience as well, although there have been a few bright spots that have led to good leads and interviews. I'm really fed up with applying for a job listing for Saginaw or Flint only to find out that I'm actually contacting a recruiter about a position in Alabama or Mississippi or Florida. I've talked to recruiters at length who, it turned out, didn't even know the company they were recruiting for, because they were actually working for another recruiter. Is there even an actual job posting at the end of that chain of recruiters, or am I putting effort into some kind of scam? I don't know. I've also put a considerable amount of time into interviewing for contract positions, making the case that I am a strong candidate, only to be completely low-balled on hourly rate, to the point where it would make no economic sense whatsoever for me to take that job (for example, a six-month C++ programming contract out of state, in Manhattan, for a major bank, where I'm expected to be pleased to accept $50 an hour and no per diem or travel expenses).

My wife suggests that in the market right now, it will basically be impossible to find a job without having a job, except through personal contacts. That's discouraging, but she is probably right. And one difficulty is that I just don't have a lot of personal contacts in the area, since we've only been here three years. I have a few, and they've been trying to help me, but in general the leads (even with referrals from people who already work in the companies) have not yielded much that is promising -- usually a series of web forms where I upload a resume, then describe my work experience again in detail, write and upload a cover letter, fill out an elaborate series of questions -- this can and often does take two or three hours -- and then hear nothing whatsoever about the job again. For most of these, there is no person to contact -- no name, no phone number, no e-mail address. I'm faceless to the company, and they are faceless to me. That's just not a good prospect.

Still, I have a generalized feeling that the right thing will come along, at least for the short term. Essentially, I have to keep believing that. I keep feeling optimistic about particular jobs. But hearing nothing back over and over again for months is starting to wear me down.

The money situation is getting to be difficult. We still have a small positive bank balance, and I've been able to continue to pay for everything we need. Fortunately our consumer debt is very low -- far lower than a lot of American families like ours. But our savings is gone, so from here on out it's either income or selling off things. We are eligible for cash assistance from the state as well, to cover things like diapers -- we will look into that this week.

Unemployment continues to cover our mortgage and taxes, and food benefits are doing quite a good job at covering our food needs. But tomorrow, I will certify for my 17th week of unemployment. I have either 3 or 6 weeks left to collect out of Michigan's maximum of 20. I'm not sure which, because the state refused to pay me for 3 weeks, and I don't know if those weeks are just lost or if I am still eligible to collect them. I'm hoping so. We've got a situation with some other bills -- things like energy, water, life insurance, and things not covered by our food benefits. Fortunately energy bills were low during the spring and early summer, but we've had to turn on the air conditioning. We use window ventilation fans strategically. We have the AC on, set low, and the furnace fan running continually, and a separate portable AC unit here in my office where it gets too hot to run computers without it. Our next bill is going to be high -- I'm guessing $600. I may be able to get that reduced by starting up a budget plan with Consumers Energy.

The dehumidifier in the basement has stopped working, which is a potential mold problem with our things stored down there. We've got to go through some of the remaining boxes. The older of our two furnaces seems to be out of commission again. I had plans to put money into our insulation and HVAC this summer, as well as exterior repainting and a whole lot of minor repair items, but that doesn't really work without income. Fortunately the roof is holding up, as is the new set of gutters we put on last fall.

Oh -- the lead situation. We had the lead inspection -- a very thorough, all-day inspection of our home, inside and outside, and grounds. The inspector did not find anything really hazardous -- there is some old lead paint in baseboards and woodwork in a couple of rooms, but it is under layers of later paint, and it is not peeling. Our dining table, which was an old ping-pong table, had lead paint on it. It's now gone, as are all the kitchen towels we used to use to wipe it. That was about two months ago, and we have still not gotten the written report which is supposed to include the results of the soil samples. Another follow-up phone call is needed. In another month or so we will be getting the children's blood levels tested again, so we should then have a read on whether there is still any kind of daily exposure going on.

Even though we have been making payments via COBRA to continue our dental coverage, several dental bills have been refused, and so we have to straighten that out. Basically, as I'm sure you are aware, health insurance companies are slimy pig-fuckers, and I don't mean that in an affectionate way. We've got some big residual bills -- our four-year-old's dental work cost a lot, even after insurance. We are trying to get the refused bills re-submitted and get whatever issue there is with our dental coverage straightened out. I'm very concerned that something is going to hit my credit rating. So far, we have not failed to meet any of our obligations, but one of these medical providers could decide to sell a debt to a collection agency at any time and that will be a black mark against us. I'm going to pull my reports and try to make sure that hasn't already happened "silently" when an insurance payment was improperly refused.

I've raised a little cash by selling off some of my home studio gear. The Apogee Ensemble audio interface I've used for the last five years to record songs and podcasts is gone. I've sold a number of my guitars, including my Steinberger fretless bass and my Adamas 12-string acoustic guitar. There isn't much left to sell that is worth the effort -- for example, I could only get $75 for a made-in-Japan Jagmaster that I paid $400 for. No one wants to buy a 20" Sony tube TV from 1994 or an electric guitar that needs rewiring work before it can be played. I could start selling our library -- I've had to gut my book collection in the past. I'm really resisting this, though, in part because the return compared to the time and effort put in to do it would be very low -- there are no decent local used book stores that might send a buyer out, so I'd be carting boxes of books down to Ann Arbor -- and in part because I just don't think I can bear giving up a collection of books I've gradually built and shaped and cultivated over the years, in some cases books I've carried around with me since I was a child. Grace and I will do a pass through our possessions trying to find some things that might be easy to turn into cash, but in general we have always lived a very bohemian lifestyle -- furniture from the Goodwill, silverware collected from rummage sales, bookshelves from Ikea; my desk is a door on plastic sawhorses. I'm not going to sell my computers; that would be eating our "seed corn."

I've been reading the recent novels by Alex Hughes about the adventures of an ex-junkie telepath. In them, he has to meet with his Narcotics Anonymous buddy, who asks him each time to list three things he's grateful for. I'm grateful for many things. I'm grateful that there is a safety net, even if unemployment may not last until I get my first paycheck from my next job. I'm grateful for the SNAP program and the WIC program that are pretty much supplying us with all the food we need to feed the family. I'm grateful for our small but supportive group of local friends, and our out-of-town friends. I'm grateful for the anonymous donation of a Meijer gift card sent by a friend of the family. I'm grateful for the handful of decent recruiters who are actually trying to hook me up with a real job for both our benefits. And I'm grateful to my family and especially to my wife who has been very patient with me as I work through this process and all the frustration and anxiety that comes with it. And I'm grateful to you, my online friends, who have been supportive as well.