<h1>Geek Like Me, Too</h1>
<p><em>A middle-aged software curmudgeon's rants, raves, gripes, and prophecies. By Paul R. Potts.</em></p>
<h1>A Workaround for the MP3 Tagging Problem</h1>
<p>I have found a workaround for the problem I was having. To quickly summarize, I use Logic Pro 9 to export projects to WAVE files. Then I want to convert them to fully tagged MP3 files.</p>
<p>I've been doing the conversion by hand using iTunes, and tagging in iTunes too. My projects tend to be in higher sample rates and bit depths -- for example, 24-bit, 96KHz. I would use Logic Pro to export the files to MP3 format, and tag them, but I have the following issues:</p>
<ol>
<li>In Logic Pro 9, if I ask Logic to bounce this project to an MP3 file, it does a long series of conversions and the end result is that I get an MP3 file at 48KHz, which is not what I want. There does not seem to be an option to force it to use 44.1KHz.</li>
<li>In Logic Pro 9, you can also ask Logic to tag your file, but if you supply long fields, it will truncate them. I think this is because it supports a somewhat out-of-date version of the ID3 standard, while iTunes supports a later version which supports longer fields.</li>
</ol>
<p>So why not just upgrade to Logic Pro X? Well, first, I'm not sure if this would actually fix the problem. At the moment, money is a bit tight. And I'm recording on a 2009 "3,1" Mac Mini. It works great, although it is slow. I don't think installing Logic Pro X on this machine is likely to make my projects easier. Most likely, Pro X eats considerably more memory and CPU and disc space than Logic Pro 9. So the plan is to make do, as much as possible, with what I have, until I can do some major upgrades, including replacing the computer.</p>
<p>Anyhow, to work around these issues with Logic Pro 9, I've been bouncing to a WAVE file, then bringing the 16-bit, 44.1KHz WAVE file into iTunes, creating an MP3 using iTunes, then tagging it by hand.</p>
<p>I wanted to see if I could do some of these steps on the command line, so I could do it in a script, a BBEdit worksheet, or even a Makefile. I'd like to automate that somewhat, not so much because I'm spending <em>that</em> much time producing podcasts, but because all these steps are error-prone. I'd also like to automate, at least partially, the generation of the entries in the podcast feed file. I'm always screwing up the time zone offset in dates, or forgetting to update the size of the file in bytes. It would be nice to have a script to do the grunt work, especially since I am often making versions to test, before I am happy enough with them to add them to the live podcast feed.</p>
<p>So I was trying to use <em>LAME</em> to do the encoding and tagging, but I discovered that iTunes would not import the "comment" field in the MP3 files created and tagged by <em>LAME</em>.</p>
<p>I asked Dan Benjamin on Twitter, and was happy that he replied, but he just wrote "Why not just bounce correctly from Logic?"</p>
<p>Maybe that works for him, but as I explained above, and in the link I sent him, it doesn't work for me, because I wind up with MP3 files encoded at 48KHz. I don't want to get too far into the weeds here, but I believe that making 48KHz MP3 files for podcasts is fairly pointless for most users, since they will need to be resampled on playback and resampling is lossy. For most listeners playing the files back on typical devices, 48KHz is a waste of storage space and will not provide a quality boost over a 44.1KHz file.</p>
<p>I also want to be able to use comment fields like this:</p>
<blockquote>
<p>This work by Paul R. Potts is released under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License (http://creativecommons.org/licenses/by-nc-sa/3.0/). See http://generalpurposepodcast.blogspot.com for more information.</p>
</blockquote>
<p>And not have them truncated.</p>
<p>In the Hydrogen Audio forum I got a couple of useful replies, but no solutions as such. The root of the problem seems to be that iTunes silently fails to import certain types of comment fields in MP3 tags. I don't want to go too far down the ID3 rabbit hole, but it seems like if the language specified for the comment field is "XXX," iTunes will not import it.</p>
<p>When <em>LAME</em> writes an ID3v2 comment tag, it seems to set the language to either some unicode string, or if the command-line switch <em>--id3v2-latin1</em> is used, to "XXX." There doesn't seem to be an option to set it to something else. In either case, iTunes will not import this field.</p>
<p>My wife Grace said "your encoder is lame."</p>
<p>I tried to use the <em>id3v2</em> command-line tool to add the tags to my MP3 file instead. Since I'm making a script, I have no qualms about using two command-line tools instead of one, if that works. But <em>id3v2</em> seems to have the same problem. It supports the <em>--id3v2-only</em> switch, but there does not seem to be any equivalent of the <em>--id3v2-latin1</em> switch, so I can't get it to write an iTunes-compatible comment field either.</p>
<p>My workaround was to rebuild <em>LAME</em>, replacing several hard-coded instances of "XXX" with "eng." This is not exactly a bug to report, since I'm not sure that <em>LAME</em> is actually "wrong" per se, according to the standard. But I can't fix the issue in iTunes. And like it or not, if I want maximum compatibility, I have to generate files that work well with iTunes.</p>
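<p>For the curious, the structure at issue is the ID3v2.3 COMM frame: a ten-byte header (four-byte frame ID, four-byte size, two flag bytes), then a text-encoding byte, a three-byte language code, a null-terminated description, and the comment text itself. Here's a hedged C sketch of building such a frame -- this is not LAME's code, just an illustration of where those three language bytes live:</p>

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of an ID3v2.3 COMM frame, showing where the three-byte
 * language code that iTunes is picky about actually lives. Not LAME's
 * code -- just an illustration based on the ID3v2.3 frame layout.
 * Writes the frame into out[] and returns the number of bytes written.
 */
static size_t build_comm_frame(uint8_t *out, const char *lang,
                               const char *description, const char *text)
{
    /* body: encoding byte + language + NUL-terminated description + text */
    size_t body_len = 1 + 3 + strlen(description) + 1 + strlen(text);
    size_t i = 0;

    memcpy(&out[i], "COMM", 4); i += 4;     /* frame ID */
    out[i++] = (uint8_t)(body_len >> 24);   /* frame size, big-endian, */
    out[i++] = (uint8_t)(body_len >> 16);   /* not counting this       */
    out[i++] = (uint8_t)(body_len >> 8);    /* ten-byte header         */
    out[i++] = (uint8_t)(body_len);
    out[i++] = 0; out[i++] = 0;             /* frame flags */
    out[i++] = 0;                           /* encoding 0: ISO-8859-1 */
    memcpy(&out[i], lang, 3); i += 3;       /* "eng" rather than "XXX" */
    memcpy(&out[i], description, strlen(description) + 1);
    i += strlen(description) + 1;
    memcpy(&out[i], text, strlen(text)); i += strlen(text);
    return i;
}
```

<p>The three bytes immediately after the encoding byte are the ones my rebuilt <em>LAME</em> now fills with "eng" instead of "XXX."</p>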
<p>A better solution would probably be to give <em>LAME</em> more options. Specifically, for this problem, an option to set the comment language would be nice. I would hesitate to add a specific switch like <em>--work_around_itunes</em> which just forced the comment language to "eng," but maybe the ability to set the comment language would be useful for other folks, while providing a workaround for this compatibility problem.</p>
<p><em>Ypsilanti, Michigan</em><br />
<em>August 12, 2017</em></p>
<h1>Two Quick Technical Tidbits</h1>
<p>Trump is blathering about "fire, fury, and - frankly - power," and I'm trying to think about something less distressing, so I'm going to write briefly about two technical issues I've come across, in the hopes that Google places this within reach of someone else looking for a solution.</p>
<h2>Creating a Shared Memory Block with Keil µVision and the Atmel SAM4E16E</h2>
<p>In my work, I have created firmware for the Atmel SAM4E16E, a nifty chip with an ARM core. My toolchain is Keil µVision version 5.23.</p>
<p>What I'm trying to do is conceptually very simple. We have a bootloader image and an application image. When the bootloader executes, I want it to put some specialized values into a block of memory, at a fixed, known address. When the bootloader executes to run the main application, I want the main application to read some information from that block of memory.</p>
<p>The challenge is not so much in writing the C code to handle this -- that's quite easy. The challenge is configuring your tools to get out of your way. In particular, you want the linker to set aside the memory in the right place, and the startup code to leave it alone (not zero it out). Documentation on this is a bit sparse and confusing. It took me quite a bit of trial and error to get it working. Along the way I discovered that the tools are both more poorly-documented and less robust than I hoped. But it did work, and here's how.</p>
<p>First, I created a common header file for describing the memory structure. It looks something like this:</p>
<pre><code>typedef struct Bootloader_Shared_Memory_s
{
    uint32_t prefix;
    uint32_t version;
    uint32_t unused0;
    uint32_t unused1;
    uint32_t unused2;
    uint32_t unused3;
    uint32_t unused4;
    uint32_t unused5;
    uint32_t unused6;
    uint32_t unused7;
    uint32_t suffix;
} Bootloader_Shared_Memory_t;

extern Bootloader_Shared_Memory_t bootloader_shared_memory;</code></pre>
<p>That just gives us a series of 32-bit words in memory. I want to set a prefix and suffix value to some special values that I will look for, to see if the shared memory block looks like it was configured as I expect. It is very unlikely that the prefix and suffix would have these values unless they were deliberately put there.</p>
<pre><code>#define BOOTLOADER_SHARED_DATA_PREFIX ( 0x00ABACAB )
#define BOOTLOADER_SHARED_DATA_SUFFIX ( 0xDEFEEDED )</code></pre>
<p>Then I just define a function that will configure the shared memory, instead of specifying initial values in the definition. That's because I don't want the compiler to treat this block as part of its initialized memory (the ".data" section of memory). See: <a href="https://en.wikipedia.org/wiki/Data_segment" class="uri">https://en.wikipedia.org/wiki/Data_segment</a></p>
<pre><code>void Configure_Bootloader_Shared_Memory( void );</code></pre>
<p>The bootloader code calls this function, which looks like this:</p>
<pre><code>void Configure_Bootloader_Shared_Memory( void )
{
    bootloader_shared_memory.prefix = BOOTLOADER_SHARED_DATA_PREFIX;
    bootloader_shared_memory.version = ( ( BOOTLOADER_REVISION_HIGH << 16 ) |
                                         ( BOOTLOADER_REVISION_MIDDLE << 8 ) |
                                         ( BOOTLOADER_REVISION_LOW ) );
    bootloader_shared_memory.suffix = BOOTLOADER_SHARED_DATA_SUFFIX;
}</code></pre>
<p>Where <em>BOOTLOADER_REVISION_HIGH</em> etc. are #defines which specify the current version number.</p>
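<p>On the application side, the same shifts run in reverse. Here is a small host-side sketch (the revision values are placeholders, not the project's real ones):</p>

```c
#include <stdint.h>

/* Placeholder revision values -- the real ones come from the
 * project's version header. */
#define BOOTLOADER_REVISION_HIGH    ( 1 )
#define BOOTLOADER_REVISION_MIDDLE  ( 4 )
#define BOOTLOADER_REVISION_LOW     ( 7 )

/* Pack exactly as Configure_Bootloader_Shared_Memory() does. */
static uint32_t pack_version( void )
{
    return ( ( (uint32_t)BOOTLOADER_REVISION_HIGH << 16 ) |
             ( (uint32_t)BOOTLOADER_REVISION_MIDDLE << 8 ) |
             ( (uint32_t)BOOTLOADER_REVISION_LOW ) );
}

/* The application recovers each field with a shift and a mask. */
static void unpack_version( uint32_t version,
                            uint8_t *high, uint8_t *middle, uint8_t *low )
{
    *high   = (uint8_t)( ( version >> 16 ) & 0xFF );
    *middle = (uint8_t)( ( version >> 8 ) & 0xFF );
    *low    = (uint8_t)( version & 0xFF );
}
```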
<p>Now, we've declared the data structure and made a function that operates on it. We need to define the data structure. I do this in a separate C file:</p>
<pre><code>/*
Highest possible base address (last RAM address is 0x2001FFFF), masked to zero off two low bits of address for 4-byte alignment
*/
#define BOOTLOADER_SHARED_MEMORY_BASE_ADDRESS ( ( 0x20020000 - sizeof( Bootloader_Shared_Memory_t ) ) & 0xFFFFFFFC )
/*
Note: comes out to 0x2001FFD4, the data structure is 0x2C bytes long; will have to change the scatter file if our data structure changes
*/
__attribute__((at(BOOTLOADER_SHARED_MEMORY_BASE_ADDRESS),zero_init))
Bootloader_Shared_Memory_t bootloader_shared_memory;</code></pre>
<p>Note that non-standard <em>__attribute__</em>. It creates a separate memory area to be passed to the linker. It specifies the base address. The <em>zero_init</em> is confusing; suffice it to say that I found it mentioned in ARM's documentation as a workaround. It doesn't mean, apparently, "initialize this block of memory to zero." It seems to mean "do no initialization of this block of memory."</p>
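<p>The address arithmetic in those comments is easy to sanity-check on a desktop compiler. In this sketch, <em>RAM_END</em> and the collapsed <em>unused[8]</em> array are my own shorthand for the layout described above:</p>

```c
#include <stdint.h>

/* Same size as the post's struct, with unused0..unused7 collapsed
 * into an array for brevity. Eleven 32-bit words = 44 (0x2C) bytes. */
typedef struct Bootloader_Shared_Memory_s
{
    uint32_t prefix;
    uint32_t version;
    uint32_t unused[ 8 ];
    uint32_t suffix;
} Bootloader_Shared_Memory_t;

/* One past the last RAM address (0x2001FFFF). */
#define RAM_END ( 0x20020000UL )

/* Highest 4-byte-aligned base address that leaves room for the struct. */
#define BOOTLOADER_SHARED_MEMORY_BASE_ADDRESS \
    ( ( RAM_END - sizeof( Bootloader_Shared_Memory_t ) ) & 0xFFFFFFFCUL )
```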
<p>Now, you might think that we've told the linker enough, but apparently we haven't. We have to use a scatter-gather file. This is a file with the extension .sct. Now, we're getting pretty deep in the weeds. The documentation on this file format is pretty sparse. I found few useful examples. The standard way to build my project seemed to result in the toolchain creating its own scatter-gather file, which looks like this:</p>
<pre><code>LR_IROM1 0x00400000 0x000E0000 { ; load region size_region
ER_IROM1 0x00400000 0x000E0000 { ; load address = execution address
*.o (RESET, +First)
*(InRoot$$Sections)
.ANY (+RO)
}
RW_IRAM1 0x20000000 0x00020000 { ; RW data
.ANY (+RW +ZI)
}
}</code></pre>
<p>Apparently this specifies the memory regions which the linker will use. It hunts through them applying some rules to determine if each code or data object can go in a region. The RW_IRAM1 is our RAM.</p>
<p>We can create our own scatter-gather file by starting with this tool-generated file and adding a region:</p>
<pre><code>LR_IROM1 0x00400000 0x000E0000 { ; load region size_region
ER_IROM1 0x00400000 0x000E0000 { ; load address = execution address
*.o (RESET, +First)
*(InRoot$$Sections)
.ANY (+RO)
}
bootloader_shared 0x2001FFD4 UNINIT 0x0000002C {
Bootloader_Shared_Memory.o (+RW +ZI)
}
RW_IRAM1 0x20000000 0x00020000 { ; RW data
.ANY (+RW +ZI)
}
}</code></pre>
<p>I've added a special region which will fit any data object in the <em>Bootloader_Shared_Memory.o</em> object file. I've added the special "UNINIT" keyword, which I found mentioned in some rather obscure ARM documentation.</p>
<p>Anyway, to make a long story short, this seems to work for me, while any variation of it does <em>not</em> work for me. If you mess up your scatter-gather file a bit, you can see strange behavior -- for example, my image file became over a gigabyte in size at one point. The toolchain doesn't seem very robust as far as error-checking goes, here. If you don't use both the special "UNINIT" and "zero_init" specifiers, things don't work right. It was quite confusing.</p>
<p>To make the build process use the custom scatter-gather file, I had to turn off the checkbox in the project's linker options that reads "Use Memory Layout from Target Dialog." Then I had to provide a path to the scatter file. The linker options dialog should show a <em>--scatter</em> option with a path to your custom scatter-gather file. It isn't enough just to provide the file.</p>
<p>You should be able to tell if it worked right by looking at the link map, which shows the address of the shared memory object. The attribute created a special memory region named after its address:</p>
<pre><code>.ARM.__AT_0x2001FFD4 0x2001ffd4 Section 44 bootloader_shared_memory.o(.ARM.__AT_0x2001FFD4)</code></pre>
<p>Then, in my application firmware, which reads the information from the shared memory object, I have to configure the linker to use the same scatter-gather file. Then, elsewhere in my code, I can look for the special prefix and suffix to decide if I want to treat the version field in the structure as legitimate data:</p>
<pre><code>if ( ( BOOTLOADER_SHARED_DATA_PREFIX == bootloader_shared_memory.prefix ) &&
     ( BOOTLOADER_SHARED_DATA_SUFFIX == bootloader_shared_memory.suffix ) )
{
    /*
        Do something with bootloader_shared_memory.version
    */
}</code></pre>
<p>This may not be the absolute best or simplest way to achieve what I set out to achieve -- if I've done something glaringly wrong, please leave a comment. But it seems to work, and so I hope someone else might find this useful.</p>
<h2>Tagging MP3 files with LAME (Unsolved Mysteries)</h2>
<p>I've been trying to simplify my podcast workflow. My old workflow, in part, looked like this:</p>
<pre><code>1. Bounce a track from Logic Pro at 24-bit/96KHz. Tell Logic to do the conversion to 16-bit/44.1KHz.
2. Don't ask Logic to make an MP3, because if I do it this way, it creates a 48KHz MP3, and I don't want that.
3. Import the 16-bit/44.1KHz track into iTunes.
4. Have iTunes create the MP3 file (annoying because I have to configure the conversion using CD import settings).
5. Use the iTunes "Get Info" tag editor to tag the MP3 file.
6. Find where iTunes put it using "Show File in Finder" and get it back out; that becomes the tagged file I upload to my server.</code></pre>
<p>I'd rather do it like this instead:</p>
<pre><code>1. Bounce a track from Logic Pro at 24-bit/96KHz. Tell Logic to do the conversion to 16-bit/44.1KHz.
2. Use a little shell script to do both the conversion to MP3 and the tagging with LAME.</code></pre>
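<p>To make step 2 concrete, here's a minimal sketch of the kind of script I have in mind. It only builds and prints the LAME command line (pipe the output to <em>sh</em> to actually run it); the tag switches are real LAME options, but the artist and tag text here are placeholders:</p>

```shell
#!/bin/sh
# Sketch: build (and print) a LAME command line that encodes a bounced
# 16-bit/44.1KHz WAVE file to a tagged MP3 in one step. Tag values are
# placeholders; pipe the output to sh to actually run it.
wav_to_mp3_cmd() {
    in="$1"
    title="$2"
    comment="$3"
    out="${in%.wav}.mp3"
    printf 'lame --id3v2-only -b 128 --tt "%s" --ta "Paul R. Potts" --tc "%s" "%s" "%s"\n' \
        "$title" "$comment" "$in" "$out"
}
```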
<p>This seems to work fine, in that it creates the MP3 file with the bit rate I expect, and the tags I expect, as seen in the MacOS X Finder. That is, if I do "Get Info" on the MP3 file, it shows all the tags I added, including the comments field.</p>
<p>However, if I then bring that MP3 file into iTunes, either just by importing it or by downloading it via my podcast feed, the tags all work except for the comment field. The comment field appears to be empty.</p>
<p>If I look at the LAME-tagged file in Windows File Explorer using "Properties," the comment tag shows up as expected. When I look at the iTunes-tagged file, it shows the "Comments" field as either containing 0, or some hexadecimal garbage. In other words, to Windows File Explorer the iTunes-tagged file appears broken, while the LAME-tagged file looks fine. So clearly there is some kind of compatibility problem going on.</p>
<p>Using a Windows program called "Tag Clinic," I can see a difference -- it looks like the LAME-created file has a comment tag that is ID3v2.3 with the Language field showing nothing. The iTunes-tagged file shows as ID3v2.2 with the Language field showing English.</p>
<p>So the question seems to be: can I make LAME export a tag that iTunes will read? If not, is this enough of a deal-breaker that I have to stop using LAME? I would like to create MP3 files with maximum compatibility, and a lot of folks (including me) use iTunes. I care less about "who is broken/buggy/not honoring the standard" as "how can I make my workflow easier while still creating podcast files that will work well for most users."</p>
<p>I didn't get any suggestions yet on Reddit. I had a comment on the Hydrogen Audio forum, suggesting that I "try forcing 2.2" (use ID3 version 2.2 comment tags). However, I don't see a way to force LAME to do that. There's a <em>--id3v2-only</em> switch, but I think that turns off the use of version 1 tags.</p>
<p>I'm left wondering if there is a different command-line utility that will encode and tag my file (ffmpeg, maybe?) Or if I should skip having LAME add the comment tag and instead use a separate command-line utility, id3tag.</p>
<p>But I can't help feeling that I'm missing something -- can the most popular command-line encoding tool really not create MP3 files that play nicely with iTunes? It seems like there must be a way to make this easier. If you've got a suggestion, please leave a comment. Comments are moderated, so I'll have to approve them.</p>
<p><em>Ypsilanti, Michigan</em><br />
<em>August 8, 2017</em></p>
<h1>Fixed Point Math with AVR-GCC</h1>
<p>Wow, I see that it has been a long time since my last post. Sorry about that. I've been very busy. I have lots to talk about. I'd like to write about reading encoders, and I'd like to write about communicating with EEPROM chips that use two-wire protocols (I2C-like) as opposed to SPI-like protocols. But in the meantime I hope this short post will be useful to someone.</p>
<h2>Embedded C</h2>
<p>I recently had reason to do some non-integer math on a small microcontroller, a member of the Atmel ATtiny series. Floating point math on this chip is pretty much out of the question; there is no floating-point hardware. I think some of the chips in this family are big enough to hold floating-point library functions, but they will certainly eat up an enormous amount of the available program space, and given that they are eight-bit microcontrollers in most ways -- the registers are 8 bits wide -- it is probably best to just avoid floating point.</p>
<p>So I began looking into fixed-point math. It is always possible to roll your own code for this kind of thing, but I thought I would see if I could take advantage of existing, debugged library code first. I found some free software libraries online, but because I develop code that runs in commercial products, I was not really happy with their license terms. It also was not very clear how to use them or whether they would fit on the ATtiny chips.</p>
<p>I discovered that there is, in fact, a standard for fixed-point types in C. It has not been widely adopted, and like the C standard itself it is a little loose in parts, in that it doesn't dictate the numeric limits of types, but rather specifies a range of acceptable sizes. And it turns out that my toolchain supports this standard, at least in part.</p>
<p>I won't try to describe everything covered in the Embedded C document. I'll spare you my struggle trying to find adequate documentation for it or determine how to do certain things in an implementation that doesn't implement everything in the Embedded C document.</p>
<p>Instead I will try to do something more modest, and just explain how I managed to use a couple of fixed-point types to solve my specific problems.</p>
<p>You can find more information on the Embedded C standard here: <a href="https://en.wikipedia.org/wiki/Embedded_C" class="uri">https://en.wikipedia.org/wiki/Embedded_C</a></p>
<p>The actual Embedded C standards document in PDF form can be found here (note: this is a link to a PDF file): <a href="http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1169.pdf" class="uri">http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1169.pdf</a>.</p>
<p>At the time of this writing, this seems to be the latest version available, dated April 4, 2006. The document indicates a copyright, but unlike the C and C++ standards, it looks like you can download it for no cost, at least at present.</p>
<h2>avr-gcc</h2>
<p>The compiler I'm using is <strong>avr-gcc</strong>. My actual toolchain for this project is Atmel Studio version 7.0.1006. Atmel Studio is available for download at no cost. The <strong>avr-gcc</strong> toolchain that Atmel Studio uses under the hood is available in other toolchains and as source code. I'm not going to try to document all the ways you can get it, but you can find out more here: <a href="https://gcc.gnu.org/wiki/avr-gcc" class="uri">https://gcc.gnu.org/wiki/avr-gcc</a>.</p>
<p>As I understand it, these Embedded C extensions are not generally available in other versions of GCC.</p>
<h2>The Basics of Fixed Point Types in Embedded C</h2>
<p>I'm assuming I don't have to go into too much detail about what fixed-point math is. To put it briefly, fixed point types are like signed or unsigned integral types except there is an implicit binary point (not a decimal point, a binary point). To the left of that binary point, the bits indicate ascending powers of two as usual: 1, 2, 4, 8, etc. To the right of that binary point, the bits indicate fractional powers of two: 1/2, 1/4, 1/8.</p>
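<p>avr-gcc's fixed-point types only exist for AVR targets, but the representation itself is easy to see in portable C. This sketch models a 16.16 layout with a plain <strong>uint32_t</strong> whose top 16 bits are the integer part and whose bottom 16 bits are the fraction (<em>FX_ONE</em> and the function names are mine, not part of Embedded C):</p>

```c
#include <stdint.h>

#define FRAC_BITS 16
#define FX_ONE    ( 1UL << FRAC_BITS )   /* 1.0 in 16.16 */

/* Multiply two 16.16 values. The intermediate product has 32
 * fractional bits, so widen to 64 bits before shifting back down. */
static uint32_t fx_mul(uint32_t a, uint32_t b)
{
    return (uint32_t)(((uint64_t)a * b) >> FRAC_BITS);
}

/* Round a 16.16 value to the nearest integer: add 0.5, then truncate. */
static uint32_t fx_to_int(uint32_t a)
{
    return (a + (FX_ONE / 2)) >> FRAC_BITS;
}
```

<p>Note that 1.5 in this layout comes out as hex <strong>00018000</strong>, matching the <strong>unsigned accum</strong> bit pattern discussed below.</p>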
<p>The Embedded C extensions for fixed-point math came about, I believe, at least originally because many microcontrollers and digital signal processors have hardware support for fixed-point math. I've used DSPs from Motorola and Texas Instruments that offered accumulators for fixed-point math in special wide sizes, such as 56 bits, and also offered saturation arithmetic. Using these registers from C required special vendor-specific compiler support. If they were supported instead using Embedded C types, programmers would have a better shot at writing portable code.</p>
<p>There are a couple of basic approaches to these types mentioned in the standard. There are <em>fractional</em> types, indicated with the keyword <strong>_Fract</strong>, with values between -1.0 and 1.0, and types that have an integral part and a fractional part, indicated with the keyword <strong>_Accum</strong>. It is expected that implementations will give these aliases, like <strong>fract</strong> and <strong>accum</strong>, but I think the authors did not want to introduce potential name clashes with existing code.</p>
<p>The standard specifies the <em>minimal</em> formats for a number of types. For example, <strong>unsigned long accum</strong> provides a minimum of 4 integer bits and 23 fractional bits. In our implementation, <strong>unsigned long accum</strong> actually provides 32 integral bits and 32 fractional bits. It maps to an underlying type that can hold the same number of bits; on this platform, that underlying type is <strong>unsigned long long</strong>, which is 64 bits.</p>
<h2>Accumulator Types</h2>
<p>For my algorithms, I don't have much interest in the <strong>_Fract</strong> types and I'm going to use only the <strong>_Accum</strong> types. I would have more interest in <strong>_Fract</strong> types if there were standard ways available to multiply them together. In that case I could use a <strong>_Fract</strong> type as a scale factor to apply to a large-ish integer in <strong>_Accum</strong> representation. For example, let's say I want to generate an unsigned binary value to send to a DAC that accepts 18-bit values. I could create a value of an <strong>_Accum</strong> type that represents the largest 18-bit value, and scale this by a <strong>_Fract</strong> value indicating a fraction to apply.</p>
<p>The advantage of this approach would be, I thought, that I would use types that were only as wide as I needed, resulting in less code. However, since this does not seem easy or convenient to do, in my own code I am using only <strong>_Accum</strong> types at present.</p>
<p>And, in fact, I'm using only unsigned <strong>_Accum</strong> types: specifically, a type with a 16-bit unsigned integer part and a 16-bit fractional part (aka "16.16"), <strong>unsigned accum</strong>, and a type with a 32-bit unsigned integer part and a 32-bit fractional part (aka "32.32"), <strong>unsigned long accum</strong>. The underlying types used to implement <strong>unsigned accum</strong> and <strong>unsigned long accum</strong> are <strong>unsigned long</strong> (32 bits) and <strong>unsigned long long</strong> (64 bits).</p>
<h2>Fixed Point Constants</h2>
<p>There are new suffixes to allow specifying fixed-point constants. For example, instead of specifying <strong>15UL</strong> (for <strong>unsigned long</strong>), one can write <strong>15UK</strong> for an <strong>unsigned accum</strong> type, or <strong>15ULK</strong> for an <strong>unsigned long accum</strong> type. One can specify the fractional part, like <strong>1.5UK</strong>.</p>
<p>On this platform, <strong>1.5UK</strong> assigned to a variable of <strong>unsigned accum</strong> type will produce the 16.16 bit pattern <strong>0000 0000 0000 0001 1000 0000 0000 0000</strong> (hex <strong>00018000</strong>), where the most significant 16 bits represent the integer part, and the least significant 16 bits represent the fractional part.</p>
<h2>Accuracy</h2>
<p>For our purposes we will mostly be using the integer results of fixed-point calculations. We don’t need to use the <strong>FX_FULL_PRECISION</strong> pragma; error of 2 ULPs for multiplication and division operations is fine.</p>
<h2>A Very Simple Example</h2>
<p>Here's a small program that shows a very simple calculation using <strong>unsigned accum</strong> types. I created a simple project in Atmel Studio that targets the ATtiny 841 microcontroller, which has 512 bytes of SRAM and 8 KiB of flash memory for programs. Today I'm not using a hardware debugger or attached chip. It is possible to configure the project's "Tool" settings to use a simulator instead of a hardware debugger or programmer.</p>
<pre><code>#include <avr/io.h>
#include <stdfix.h>

static volatile unsigned accum sixteen_sixteen_half = 0.5UK;
static volatile unsigned accum sixteen_sixteen_quarter = 0.25UK;
static volatile unsigned accum sixteen_sixteen_scaled;

int main(void)
{
    sixteen_sixteen_scaled = sixteen_sixteen_half * sixteen_sixteen_quarter;
}</code></pre>
<p>We can watch this run in the debugger. In fact, this is the reason for including the <strong>volatile</strong> keyword in the variable declarations. Even with optimizations turned off, the compiler will still aggressively put variables in registers and avoid using memory at all if it can. While I don't seem to be able to use watches on these variables, as I can when using a hardware debugger and microcontroller, I can see the values change in memory as I step through the program. The values are organized as little-endian. Translating this, I can see that <strong>sixteen_sixteen_half</strong> shows up as <strong>0x00008000</strong>, <strong>sixteen_sixteen_quarter</strong> shows up as <strong>0x00004000</strong>, and the result of the multiplication operation, <strong>sixteen_sixteen_scaled</strong>, is assigned <strong>0x00002000</strong>, representing one-eighth.</p>
<h2>Code Size</h2>
<p>If I bring up the Solution Explorer window (via the View menu in Atmel Studio), I can take a look at the output file properties by right-clicking. The generated .hex file indicates that it is using 310 bytes of flash. If I do the same calculation using float types, the library support for floating-point multiplication makes the flash use 580 bytes.</p>
<p>What happens if I scale up to a larger type? Well, if I change my <strong>unsigned accum</strong> declarations to use <strong>unsigned long accum</strong>, suddenly my flash usage goes up to 2776 bytes. That's a lot given that I have 8192 bytes of flash, but it still leaves me quite a bit of room for my own program code.</p>
<h2>A Few Techniques</h2>
<p>Let's say we want to scale a value to send to a linear DAC. Our DAC accepts 18-bit values. That means we can send it a value between <strong>0x0</strong> and <strong>0x3FFFF</strong>.</p>
<p>To work directly with an <strong>_Accum</strong> type that will represent these values, we have to use an <strong>unsigned long accum</strong>. To declare an <strong>unsigned long accum</strong> variable that is initialized from an <strong>unsigned long</strong> variable, I can just cast it:</p>
<pre><code>unsigned long accum encoder_accum = ( unsigned long accum )encoder_val;</code></pre>
<p>We can also cast from a shorter integral type -- for example, from an <strong>unsigned accum</strong> -- and get the correct results. Beware of mixing signed and unsigned types! (As you always should, when working in C).</p>
<p>We can do math on our <strong>unsigned long accum</strong> types using the usual C math operators.</p>
<p>Let's say we want to get the <strong>unsigned long accum</strong> value converted back to an integral type. How would we do that? We use <strong>bitsulk</strong> to get the bitwise value (this is actually just a cast operation under the hood). Because we're going to truncate the fractional part, I add <strong>0.5ULK</strong> first.</p>
<pre><code>unsigned long val = bitsulk( encoder_accum + 0.5ULK ) >> ULACCUM_FBIT;</code></pre>
<p>If we want the remainder as an <strong>unsigned long accum</strong>, we can get it. Remember that the fractional part of the accumulator type will be in [0.0..1.0) (that is, inclusive of zero, exclusive of one). Note that the use of the mask here is not very portable; there are some tricks I could do to make it more portable, but for now, I am more concerned about readability.</p>
<pre><code>unsigned long accum remainder = ulkbits( bitsulk( encoder_accum ) & 0xFFFFFFFF );</code></pre>
<p>The <strong>ulkbits</strong> and <strong>bitsulk</strong> operations are just casts, under the hood, so this boils down to a shift and mask.</p>
<p>The Embedded C specification defines a number of library functions that work with the fractional and accumulator types. For example, abslk() will give the absolute value of a <strong>long accum</strong> argument. There are also rounding functions, like roundulk(). These seem to be supported in avr-gcc, but so far I have not needed them.</p>
<h2>Conclusion</h2>
<p>I hope this very brief tutorial may have saved you some time and aggravation in trying to use these rather obscure, but very useful, language features. If you come across anything interesting having to do with <strong>avr-gcc</strong>'s support for the Embedded C fixed-point types, please leave a comment!</p>
<p><em>Saginaw, Michigan</em><br />
<em>October 6, 2016</em></p>
<h1>SPI Communications with the Arduino Uno and M93C46 EEPROM: Easy, Fun, Relaxing</h1>
<p>When I write code for an embedded microprocessor, I frequently need to use communications protocols that allow the micro to communicate with other chips. Often there are peripherals built in to the micro that will handle the bulk of the work for me, freeing up micro clock cycles and allowing me to write fewer lines of code. Indeed, the bulk of a modern microcontroller datasheet is usually devoted to explaining these peripherals. So, if you aren't trying to do anything unusual, your micro may have a peripheral that will do most of the work for you. There might be a pre-existing driver library you can use to drive the peripheral. But sometimes you don't have a peripheral, or it won't do just what you need it to do, for one reason or another. In that case, or if you just want to learn how the protocols work, you can probably seize control of the GPIO pins and implement the protocol yourself.</p>
<p>That's what I will do, in the example below. I will show you how to implement the SPI (Serial Peripheral Interface) protocol, for communicating with an EEPROM. I've used SPI communication in a number of projects on a number of microcontrollers now. The basics are the same, but there are always issues to resolve. The SPI standard is entertaining and keeps you on your toes, precisely because it is so non-standard; just about every vendor extends or varies the standard a bit.</p>
<p>The basics of SPI are pretty simple. There are four signals: <i>chip select</i>, <i>clock</i>, <i>incoming data</i>, and <i>outgoing data</i>. The protocol is <i>asymmetrical</i>; the microcontroller is usually the master, and other chips on the board are slaves -- although it would be possible for the micro to act as a slave, too. The asymmetry is because the master drives the chip select and clock. In a basic SPI setup, the slaves don't drive these signals; the slave only drives one data line. I'll be showing you how to implement the <i>master's</i> side of the conversation.</p>
<p><i>Chip select</i>, sometimes known as <i>slave select</i> from the perspective of the slave chip, is a signal from the master to the slave chip. This signal cues the slave chip, informing the chip that it is now "on stage," ready for its close-up, and it should get ready to communicate. Whether the chip select is active high, or active low, varies. Chip select can sometimes be used for some extra signalling, but in the basic use case the micro sets the chip select to the logically active state, then, after a short delay, starts the clock, runs the clock for a while as it sets and reads the data signals, stops the clock, waits a bit, and turns off the chip select.</p>
<p>Here's a picture showing the relationship between clock and chip select, as generated by my code. Note that I have offset the two signals slightly in the vertical direction, so that it is easier to see them:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-IKlP-PJT9Cg/VtCSTXo85MI/AAAAAAAADXs/gDBabsk4a3g/s1600/spi_clock_and_chip_select.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-IKlP-PJT9Cg/VtCSTXo85MI/AAAAAAAADXs/gDBabsk4a3g/s640/spi_clock_and_chip_select.JPG" /></a></div>
<p>The <i>clock</i> signal is usually simple. The only common question is whether the clock is high when idle, or low when idle. Clock speeds can vary widely. Speeds of 2 to 10 MHz are common. Often you can clock a part much slower, though. CMOS parts can be clocked at an arbitrarily slow speed; you can even stop the clock in the middle of a transfer, and it will wait patiently.</p>
<p>What is less simple is the number of clocks used in a transaction. That can become very complex. Some parts use consistent transfer lengths, where for each transaction, they expect the same number of clock cycles. Other parts might use different numbers of clock cycles for different types of commands.</p>
<p>From the perspective of the slave, the <i>incoming data</i> arrives on a pin that is often known, from the perspective of the microcontroller, as MOSI (master out, slave in). This is again a simple digital signal, but the exact way it is interpreted can vary. Essentially, one of the possible clock transitions tells the slave to read the data. For example, if the clock normally idles low, a rising clock edge might signal the slave to read the data. For reliability, it is very important that the master and slave are in agreement about which edge triggers the read. Above all, you want to avoid the case where the slave tries to read the incoming data line on the wrong edge, the edge when the master is allowed to change it. If that happens, communication <i>might</i> seem to work, but it works only accidentally, because the slave just happens to catch the data line slightly after it has changed, and it may fail when the hardware parameters change slightly, such as when running at a higher temperature.</p>
<p>Let me be as clear as I can: when implementing communication using SPI, be certain you are <i>very</i> clear about the idle state of the clock line, and which clock transition will trigger the slave to read the data line. Then, make sure you only change the data line on the <i>opposite</i> transition.</p>
<p>Terminology surrounding SPI transactions can be very confusing. According to <a href="https://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus#Clock_polarity_and_phase">Wikipedia</a> and <a href="http://www.byteparadigm.com/applications/introduction-to-i2c-and-spi-protocols/">Byte Paradigm</a>, <i>polarity zero</i> means the clock is zero (low) when inactive, and <i>polarity one</i> means the clock is one (high) when inactive. <i>Phase zero</i> means the slave reads the data line on the leading edge and the master can change it on the trailing edge, while <i>phase one</i> means the slave reads the data line on the trailing edge and the master changes it on the leading edge.</p>
<p>But some Atmel documentation (like this <a href="http://www.atmel.com/Images/Atmel-42209-SAM4-Serial-Peripheral-Interface-SPI_AT07890_ApplicationNote.pdf">application note PDF file</a>) uses the opposite meaning for "phase," where <i>phase one</i> means the slave reads data on the leading edge.</p>
<p>Because of this confusion, in my view it is best not to specify a SPI implementation by specifying "polarity" and "phase." So what would be clearer?</p>
<p><a href="http://www.totalphase.com/protocols/spi/?___SID=U">Aardvark</a> tools use the terms "rising/falling" or "falling/rising" to describe the clock behavior, and "sample/setup" or "setup/sample" to indicate the sampling behaviors. I find this to be less ambiguous. If the clock is "rising/falling," it means that the clock is low when idle, and rises and then falls for each pulse. If the "sample" comes first, it means that the slave should read the data line on the leading edge, and if the "setup" comes first, it means that the slave should read the data on the trailing edge.</p>
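If it helps to see that mapping written down, here is how the four conventional SPI mode numbers (CPOL and CPHA packed into two bits, following the Wikipedia convention) translate into the Aardvark-style terms. This little helper is purely illustrative; the function names are mine, not from any library:

```c
#include <assert.h>
#include <string.h>

/* Map an SPI mode number (CPOL in bit 1, CPHA in bit 0, per the
   common convention) to Aardvark-style descriptions. */
static const char *clock_shape(int mode)
{
    /* CPOL = 0: the clock idles low, so each pulse rises then falls. */
    return (mode & 2) ? "falling/rising" : "rising/falling";
}

static const char *data_timing(int mode)
{
    /* CPHA = 0: the slave samples on the leading edge ("sample/setup");
       CPHA = 1: the master sets up on the leading edge and the slave
       samples on the trailing edge ("setup/sample"). */
    return (mode & 1) ? "setup/sample" : "sample/setup";
}
```

Mode 0, then, comes out as "rising/falling" and "sample/setup" -- the variant used in the rest of this post.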
<p>Here's a picture of my clock signal along with my MOSI (master out, slave in) signal. This SPI communication variant is "rising/falling" and "sample/setup." In order to allow the slave to read a valid bit on the <i>leading</i> clock edge, my code sets the MOSI line to its initial state <i>before</i> the rising edge of the first clock pulse. Again, I have offset the signals slightly in the vertical direction, so that it is easier to see them:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-8XjJ2cqCpCk/VtCTsJbSTvI/AAAAAAAADX4/a-omMluLcL0/s1600/spi_clock_and_MOSI.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-8XjJ2cqCpCk/VtCTsJbSTvI/AAAAAAAADX4/a-omMluLcL0/s640/spi_clock_and_MOSI.JPG" /></a></div>
<p>In the screen shot above, the master is sending nine bits: 100110000. Each bit is sampled on the rising clock edge. On the first rising clock edge, the MOSI line (in blue) is high. On the second rising clock edge, the MOSI line is low.</p>
<p>From the perspective of the slave, the <i>outgoing data</i> is sent on a pin that is often known as MISO (master in, slave out). This works in a similar way as the incoming data, except that the slave asserts the line.</p>
<p>When the master <i>sends</i> data to the slave, the master turns on the chip select (whether that means setting it low, or setting it high), changes the MOSI line and clock as needed, and then turns off the chip select.</p>
<p>When the master <i>receives</i> data from the slave, the behavior is slightly more confusing. To get data from the slave, the master has to generate clock cycles. This means that it is also <i>sending</i> something, depending on how it has set the MOSI line. During the read operation, what it is <i>sending</i> may consist of "I don't care" bits that the slave will not read. Receiving data can sometimes require one transaction to prepare the slave for the read operation, and then another to "clock in" the data. Sometimes a receive operation may be done as one transaction, but with two parts: the master sends a few bits indicating a read command, and then continues to send clock cycles while reading the slave's data line. Sometimes there are dummy bits or extra clock cycles in between the parts of this transaction.</p>
<p>Here's a picture that shows a read operation. I'm showing clock and MISO (mmmm... miso!). This shows a long transaction where the master sends a request (the MOSI line is not shown in this picture) and then continues to generate clock pulses while the slave toggles the MISO line to provide the requested data.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-klPE2N_M51Y/VtCWIqgS5JI/AAAAAAAADYE/jTJvv0dTkPg/s1600/spi_clock_and_MISO.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-klPE2N_M51Y/VtCWIqgS5JI/AAAAAAAADYE/jTJvv0dTkPg/s640/spi_clock_and_MISO.JPG" /></a></div>
<p>Now let's look at my hardware and software. I wrote some code to allow an Arduino Uno to communicate with a serial EEPROM chip. The chip in question is a M93C46 part. This is a 1Kb (one kilobit, or 1024 bits) chip. The parts are widely available from different vendors. I have a few different through-hole versions that I got from various eBay sellers; in testing them, they all worked fine. The datasheet I used for reference is from the <a href="http://www.st.com/web/catalog/mmc/FM76/CL1276/SC112/PF63991">ST Microelectronics</a> version of the part.</p>
<p>These parts all seem to have similar pinouts. Pin 1 is the chip select, called slave select in the STM documentation. Pin 2 is the clock. Pins 3 and 4 are data pins. On the other side of the chip, there is a pin for +5V or +3.3V, a pin for ground, an unused pin presumably used by the manufacturer for testing, and a pin identified as ORG (organization), which determines whether the data on the chip is organized into 64 16-bit words, or 128 8-bit bytes.</p>
<p>There are other versions of this chip; the 1Kb is only one version. The command set differs slightly between sizes, but it should be pretty easy to adapt my example to a different-sized part. A full driver would be configurable to handle different memory sizes. It would not be hard to implement that, but for this example I am keeping things simple.</p>
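To sketch what that configurability might look like: assuming the command layout stays the same across sizes (a start bit, a two-bit opcode, then a wider or narrower address field, then the data), a generic write-command builder could be parameterized by address width. The function names here are hypothetical, not from my driver:

```c
#include <assert.h>
#include <stdint.h>

/* Build a 93Cxx-style 16-bit-organization WRITE command: start bit (1),
   opcode (01), then addr_bits of address, then 16 data bits.
   Binary 101 is 5, hence the constant. Names are illustrative only. */
static uint32_t build_write_cmd(uint8_t addr_bits, uint32_t addr, uint16_t val)
{
    return (5UL << (addr_bits + 16)) | (addr << 16) | val;
}

static uint8_t write_cmd_num_bits(uint8_t addr_bits)
{
    return 3 + addr_bits + 16; /* start + opcode + address + data */
}
```

With addr_bits set to 6, this reproduces the 25-bit write-command template used for the M93C46 below.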
<p>Here's my simple circuit, on a prototype shield mounted to an Arduino Uno:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-rogjC-Y3gos/VtCWkfL-lGI/AAAAAAAADYI/EbfyqDu16cs/s1600/proto_shield.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-rogjC-Y3gos/VtCWkfL-lGI/AAAAAAAADYI/EbfyqDu16cs/s640/proto_shield.jpg" /></a></div>
<p>Here's a simple schematic showing the Arduino pins connected to the EEPROM chip:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-apuAL3QDslg/VtCWz9qfQZI/AAAAAAAADYM/n8S33YEvf7Y/s1600/EEPROM_pins.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-apuAL3QDslg/VtCWz9qfQZI/AAAAAAAADYM/n8S33YEvf7Y/s640/EEPROM_pins.jpg" /></a></div>
<p>I'm not much of an electrical engineer, but that should convey that pin 1, usually marked with a little dot or tab on the chip, is on the lower right. We count pins counter-clockwise around the chip. So pin 5 goes to ground (I used the ground next to the data pins; that is the green wire going across the board). Make sure you are careful to connect the right pins to power and ground, or you can let the magic smoke out of one of these little EEPROM chips, and maybe disable your Arduino board, too, perhaps permanently (you'll never guess how I know this!)</p>
<p>I also have three LEDs connected to three more pins, connected through 220 ohm resistors, with the negative side of the LEDs going to a ground pin on the left side of the prototype board. Those are not required; they are there solely to create a simple busy/pass/fail display. You can use the serial monitor, if the Arduino is attached to your computer, or whatever other debugging method is your favorite.</p>
<p>I have done this kind of debugging with elaborate, expensive scopes that have many inputs and will decode SPI at full speed. That is very nice, but you don't necessarily need all that for a simple project like this. I got this project working using a Rigol two-channel scope. I was not able to capture a trace of all four lines at once using this scope, but I didn't need to. With two channels, I could confirm that the chip select and clock were changing correctly with respect to each other. Then I could look at the MOSI along with the clock and verify that the data was changing on the expected clock transition. Then I could look at the MISO along with the clock to verify the bits the Arduino was getting back from the serial EEPROM chip. Here's my modest setup, using a separate breadboard rather than a shield:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-D-9ShTn3zTs/VtCY730IKEI/AAAAAAAADYc/qR1Ovt4zh-Q/s1600/arduino_spi_test_setup.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-D-9ShTn3zTs/VtCY730IKEI/AAAAAAAADYc/qR1Ovt4zh-Q/s640/arduino_spi_test_setup.JPG" /></a></div>
<p>Here's a view of a SPI conversation with the EEPROM chip: a write operation, followed by a read operation to verify that I can get back what I just wrote. This shows clock and MOSI, so we don't see the slave's response, but you can see that the second burst has a number of clock cycles where the master is not changing the data line. Those are "don't care" cycles where the master is listening to what the slave is saying. Note also that I am running this conversation at a very slow clock speed; each transition is 1 millisecond apart, which means that my clock is running at 500 <i>Hertz</i> (not MHz or even KHz). I could certainly run it faster, but this makes it easy to see what is happening, if I toggle an LED along with the chip select to show me when the master is busy.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-QsYeT94IMqI/VtCZYZ2w9PI/AAAAAAAADYg/pfslH7R_I64/s1600/spi_write_and_read.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://2.bp.blogspot.com/-QsYeT94IMqI/VtCZYZ2w9PI/AAAAAAAADYg/pfslH7R_I64/s640/spi_write_and_read.JPG" /></a></div>
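To spell out the arithmetic behind that 500 Hz figure: one transition per millisecond means a full clock cycle (one low half-period plus one high half-period) takes 2 ms, and the frequency is the reciprocal of the period:

```c
#include <assert.h>

/* With one clock transition every ms_per_transition milliseconds, a
   full cycle is two transitions, so the period is 2 * ms_per_transition
   milliseconds and the frequency in hertz is 1000 over that. */
static double clock_hz(double ms_per_transition)
{
    return 1000.0 / (2.0 * ms_per_transition);
}
```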
<p>Now, here's some code.</p>
<p>You don't have to use these pins, but these are the ones I used.</p>
<pre>#define SLAVESELECT 10 /* SS */
#define SPICLOCK 11 /* SCK */
#define DATAOUT 12 /* MOSI */
#define DATAIN 13 /* MISO */</pre>
<p>Here's a "template" 32-bit word that holds a 16-bit write command.</p>
<pre>#define CMD_16_WRITE ( 5UL << 22 )
#define CMD_16_WRITE_NUM_BITS ( 25 )</pre>
<p>This defines a 25-bit command. There is a start bit, a 2-bit opcode, a six-bit address (for selecting addresses 0 through 63), and 16 data bits.</p>
<p>To use this template to assemble a write command, there's a little helper function:</p>
<pre>uint32_t assemble_CMD_16_WRITE( uint8_t addr, uint16_t val )
{
    return ( uint32_t )CMD_16_WRITE |
           ( ( uint32_t )addr << 16 ) |
           ( uint32_t )val;
}</pre>
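As a sanity check on the bit layout, consider a hypothetical address of 0x2A and data value of 0xBEEF (both chosen arbitrarily). The assembled word puts the start bit and opcode in bits 24 through 22, the address in bits 21 through 16, and the data in the low 16 bits:

```c
#include <assert.h>
#include <stdint.h>

#define CMD_16_WRITE ( 5UL << 22 )

/* Same helper as in the post, reproduced so this file stands alone. */
static uint32_t assemble_CMD_16_WRITE( uint8_t addr, uint16_t val )
{
    return ( uint32_t )CMD_16_WRITE |
           ( ( uint32_t )addr << 16 ) |
           ( uint32_t )val;
}
```

assemble_CMD_16_WRITE(0x2A, 0xBEEF) comes out to 0x16ABEEF: reading the 25 bits from the top, that is 1 (start), 01 (write opcode), 101010 (address 0x2A), then 0xBEEF.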
<p>Now we need a function that will send that command. First, let's start with a function that will send out a sequence of bits, without worrying about the chip select and final state of the clock.</p>
<pre>void write_bit_series( uint32_t bits, uint8_t num_bits_to_send )
{
    uint8_t num_bits_sent;
    for ( num_bits_sent = 0; num_bits_sent < num_bits_to_send;
          num_bits_sent += 1 )
    {
        digitalWrite( SPICLOCK, LOW );
        digitalWrite( DATAOUT, bits & ( 1UL <<
            ( num_bits_to_send - num_bits_sent - 1 ) ) ? HIGH : LOW );
        delay( INTER_CLOCK_TRANSITION_DELAY_MSEC );
        digitalWrite( SPICLOCK, HIGH );
        delay( INTER_CLOCK_TRANSITION_DELAY_MSEC );
    }
}</pre>
<p>This maps the bits to the DATAOUT (MOSI) line. We change the data line on the falling edge of the clock. We aren't using a peripheral to handle the SPI data; we just "bit bang" the outputs using a fixed delay.</p>
<p>Here's a function that will send a command that is passed to it. It works for write commands:</p>
<pre>void write_cmd( uint32_t bits, uint8_t num_bits_to_send )
{
    digitalWrite( SLAVESELECT, HIGH );
    delay( SLAVE_SEL_DELAY_PRE_CLOCK_MSEC );
    write_bit_series( bits, num_bits_to_send );
    /*
        Leave the data and clock lines low after the last bit sent
    */
    digitalWrite( DATAOUT, LOW );
    digitalWrite( SPICLOCK, LOW );
    delay( SLAVE_SEL_DELAY_POST_CLOCK_MSEC );
    digitalWrite( SLAVESELECT, LOW );
}</pre>
<p>That's really all you need to send out a command. For example, you could send a write command like this:</p>
<pre>write_cmd( assemble_CMD_16_WRITE( addr, write_val ), CMD_16_WRITE_NUM_BITS );</pre>
<p>Note that before you can write successfully, you have to set the write enable. My code shows how to do that. Basically, you just define another command:</p>
<pre>#define CMD_16_WEN ( 19UL << 4 )
#define CMD_16_WEN_NUM_BITS ( 9 )
write_cmd( ( uint16_t )CMD_16_WEN, CMD_16_WEN_NUM_BITS );</pre>
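To see why 19UL << 4 is the right value: 19 is binary 10011 -- a start bit, the 00 opcode, and the 11 that selects the write-enable function -- and the shift appends four don't-care address bits, for nine bits total. A little host-side helper (my own, for illustration) renders the pattern in the same MSB-first order that write_bit_series shifts bits out:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CMD_16_WEN          ( 19UL << 4 )
#define CMD_16_WEN_NUM_BITS ( 9 )

/* Render the low num_bits of a command as '0'/'1' characters,
   MSB first -- the same order write_bit_series sends them. */
static void render_bits(uint32_t bits, uint8_t num_bits, char *out)
{
    for (uint8_t i = 0; i < num_bits; i++)
        out[i] = (bits & (1UL << (num_bits - i - 1))) ? '1' : '0';
    out[num_bits] = '\0';
}
```

render_bits(CMD_16_WEN, 9, buf) yields "100110000", the same nine-bit pattern shown in the MOSI screenshot earlier.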
<p>This EEPROM chip will erase each byte or word as part of a write operation, so you don't need to perform a separate erase. That may not be true of all EEPROM chips.</p>
<p>To read the data back, we need a slightly more complex procedure. Our read command uses the <b>write_bit_series</b> function to send out the first part of the read command, then starts clocking out "don't care" bits and reading the value of the MISO line:</p>
<pre>uint16_t read_16( uint8_t addr )
{
    uint8_t num_bits_to_read = 16;
    uint16_t in_bits = 0;
    uint32_t out_bits = assemble_CMD_16_READ( addr );
    digitalWrite( SLAVESELECT, HIGH );
    delay( SLAVE_SEL_DELAY_PRE_CLOCK_MSEC );
    /*
        Write out the read command and address
    */
    write_bit_series( out_bits, CMD_16_READ_NUM_BITS );
    /*
        Insert an extra clock to handle the incoming dummy zero bit
    */
    digitalWrite( DATAOUT, LOW );
    digitalWrite( SPICLOCK, LOW );
    delay( 1 );
    digitalWrite( SPICLOCK, HIGH );
    delay( 1 );
    /*
        Now read 16 bits by clocking. Leave the outgoing data line low.
        The incoming data line should change on the rising edge of the
        clock, so read it on the falling edge.
    */
    for ( ; num_bits_to_read > 0; num_bits_to_read -= 1 )
    {
        digitalWrite( SPICLOCK, LOW );
        uint16_t in_bit = ( ( HIGH == digitalRead( DATAIN ) ) ? 1UL : 0UL );
        in_bits |= ( in_bit << ( num_bits_to_read - 1 ) );
        delay( INTER_CLOCK_TRANSITION_DELAY_MSEC );
        digitalWrite( SPICLOCK, HIGH );
        delay( INTER_CLOCK_TRANSITION_DELAY_MSEC );
    }
    /*
        Leave the data and clock lines low after the last bit sent
    */
    digitalWrite( DATAOUT, LOW );
    digitalWrite( SPICLOCK, LOW );
    delay( SLAVE_SEL_DELAY_POST_CLOCK_MSEC );
    digitalWrite( SLAVESELECT, LOW );
    return in_bits;
}</pre>
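The MSB-first accumulation in that read loop is easy to get wrong (an off-by-one in the shift scrambles every word), so it's worth checking the arithmetic on the host, with the GPIO calls taken out of the picture. This stand-alone function is a test sketch of mine, not part of the driver; it replays the same shift logic against an array of pre-sampled bits:

```c
#include <assert.h>
#include <stdint.h>

/* Reassemble sampled bits exactly the way read_16 does: the first bit
   sampled lands in the most significant position of the result. */
static uint16_t accumulate_msb_first(const uint8_t *samples, uint8_t count)
{
    uint16_t in_bits = 0;
    for (uint8_t n = count; n > 0; n -= 1)
        in_bits |= (uint16_t)(samples[count - n] ? 1U : 0U) << (n - 1);
    return in_bits;
}
```

Feeding it the bit sequence 1011 0000 1111 0001 should produce 0xB0F1; if it doesn't, the shift arithmetic is wrong.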
<p>And that's the basics. To test this, I put an EEPROM chip on a breadboard and just wired up the pins as specified in the code. Check your datasheet to determine if you can power the part with 5V or 3V. The chips I got seem to work fine with either, although if you are testing with a scope, you might want to use 5V so that the data out you get back from the chip has the same level as the 5V Arduino outputs.</p>
<p>You can find the full sketch on GitHub <a href="http://github.com/paulrpotts/Arduino_M93C46">here</a>.</p>
<p>Good luck, and if you found this useful, let me know by posting a comment. Comments are moderated, so they will not show up immediately, but I will post all (non-abusive, non-spam) comments. Thanks for reading!</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-46013856754014132452016-01-01T18:08:00.000-05:002016-01-03T12:38:44.729-05:00Star Wars: The Force Awakens<p><i>This review contains many spoilers.</i></p>
<p>I want to start out by saying that I really was expecting, even hoping, to dislike <i>The Force Awakens</i>.</p>
<p>Entering the theater a cynical, somewhat bitter middle-aged man, I fully expected to be able to take my distaste for the other work of J. J. Abrams (particularly, his atrocious 2009 <i>Star Trek</i> reboot), and Disney, and recycled nostalgia in general, and throw it directly at the screen.</p>
<p>I was an original fan of <i>Star Wars</i> -- I saw the first one perhaps a dozen times in the theater -- and I pretty much agree with the critical consensus about the prequels. Their utter failure led me to believe that the things I loved most about Episode IV had, for the most part, little to do with big-budget filmmaking, but were the result of giving a bunch of really brilliant costume and set designers and cinematographers and editors and sound designers a lot of creative control and a relatively low budget -- a situation unlikely to be replicated in a truly big film, an important <i>investment</i> where none of the investing parties would want to take any significant risks.</p>
<p>I was wrong, and I'm still somewhat troubled by that. Is <i>The Force Awakens</i> a good movie, or was I just primed by my age and the bad experience with the prequels to suck up something relatively bad and call it good, simply because it lacks the awfulness of the prequels, and smells a lot like the 1977 original? I don't think I can actually answer that question, definitively, or at least not easily, because really taking that up requires me to think critically about the original 1977 <i>Star Wars</i>, something I find hard to do, given the way the film imprinted itself upon my nine-year-old brain. Is it really all that and a bag of chips? Or did it just land at the right time to be the formative movie of my childhood?</p>
<p>One of my sons is nine, by the way. He enjoyed the new movie, but I don't think it <i>blew his mind</i> the way the original <i>Star Wars</i> blew mine, simply because we have, ever since 1977, lived in the era that had <i>Star Wars</i> in it.</p>
<p>To be clear -- it's not the case that there weren't big action movies back then, and big science fiction movies back then. We had movies like <i>2001: a Space Odyssey</i>, which also formed my tastes. We had <i>Silent Running</i>. We had <i>Logan's Run</i>. But it would be impossible to overstate the shock wave of <i>Star Wars</i> -- the innovative effects, editing, and yes, even marketing. We just can't go back to that world. He's seen a lot of things that are Star Wars-ish, while in 1977, I never had.</p>
<p>And make no mistake, the new <i>Star Wars</i> is, most definitely, <i>Star Wars</i>-ish, in the way that the prequels were not. The world of the prequels was too clean, too sterile, too political, and too comic. <i>Star Wars</i> may have been the single most successful blending of genres ever attempted; a recent <a href="http://www.slate.com/articles/arts/cover_story/2015/12/star_wars_is_a_pastiche_how_george_lucas_combined_flash_gordon_westerns.single.html">article</a> called it "postmodern," and I think that is correct. The prequels might have been attempts at post-modern, too, but they seem to have a different set of influences, and just seem, in every respect, to have been assembled lazily, and without artfulness. For just one example, see how one of the prequel lightsaber battle scenes <a href="http://petapixel.com/2015/12/23/this-is-a-star-wars-action-scene-in-the-age-of-cgi/">was actually filmed</a>.</p>
<p><i>The Force Awakens</i> follows the 1977 formula so closely that it is perilously close to coming across as a kind of remake or pastiche of the original. But it is not that. It is actually an homage to the original. There are a lot of parallel details and actual "easter eggs," where props make cameos and audio clips from the original movies are sprinkled into the new one. In one of my favorite moments, on Starkiller base we hear a clip from the first movie, "we think they may be splitting up." Some reviewers have made their reviews catalogs of these moments, and consider this excessive, complaining about the "nostalgia overload." But although it is noticeable, I think the producers knew just how much nostalgia would be appreciated, and how much would become annoying, and walked that line very well. The film re-creates the world where Han Solo and Leia Organa will not look out of place. And when Harrison Ford and Carrie Fisher actually appear on screen, the weight of almost 40 years suddenly lands on me, and it's a gut punch. I must have gotten something in my eye.</p>
<p>While Solo is an important character in this film, Leia has very few scenes, and the action centers largely around new characters. The casting is what you might call modern. No one could claim that Rey is not a strong, compelling female character. Daisy Ridley's acting in this movie is very impressive. Without her strong performance, we'd be prone to spend time musing on the oddness of her almost-absent back-story. As it is, we aren't really given a lot of time to meditate on such things, because she keeps very busy, kicking ass and taking names.</p>
<p>John Boyega as Finn is good too, although he doesn't seem, to me, to quite <i>inhabit</i> his character the way Ridley inhabits Rey. And so I find myself spending a little time wondering about the gaps and inconsistencies in his character's back-story. He describes himself as a sanitation worker. If that's true, why is he on the landing craft in the movie's opening scenes, part of the First Order interplanetary SWAT team sent to Jakku to retrieve the MacGuffin? He's supposedly a low-level worker on Starkiller base, but he knows how to disable the shields? He's a stormtrooper, trained since birth to kill, but unable to kill. Has he never been "blooded" before? We're unfortunately reminded that this doesn't actually make a lot of sense. Of course this is true of many elements of the original trilogy. The key to making that kind of thing not matter, for a Star Wars movie, is to keep everything happening so fast that the audience doesn't have time to worry about all that.</p>
<p>The story moves along quickly and we meet one of the most interesting characters, Kylo Ren, played by Adam Driver. Driver plays an adolescent, and puts Hayden Christensen's portrayal of Anakin to utter shame -- although one senses that much of Christensen's failure may have been due to Lucas's poor direction of the young actor. Driver is completely compelling on-screen, and his scenes with Ridley are just mesmerizing. I really can't say enough good things about them. I've seen two screenings now, and I would happily see it again, just to watch those two characters interact. It's really impressive.</p>
<p>That's really enough to hang a movie on -- a few really great performances, a few good performances, some terrific scenes, and no scenes that are actually bad. (Howard Hawks famously said that to make a good movie, you needed three good scenes and no bad ones; <i>The Force Awakens</i> exceeds that requirement).</p>
<p>Of course, there are a lot of confusing, unconvincing, and unwieldy things about this film. For example, Rey is much stronger in the ways of the force, and a very powerful fighter, right off the bat. She's grown up on Jakku, apparently spending years alone, and entirely untrained, while in the original trilogy we watched Luke start off with some talent for using the Force, but not much skill, and get trained up like Rocky Balboa. How did this come to pass? Well, it's a mystery we just have to accept for now. Maybe she had a lot of karate classes as a very young child. I maintain that when a movie like this leaves things unexplained, the audience will do the work for the screenwriters and make it work -- <i>if</i> the audience has decided to side with the movie and help it along. And if they haven't, no amount of rationalization will explain away the inevitable plot holes in a satisfying way. This movie has done such a good job at entertaining the audience, and introducing a compelling character early on, that we as the audience are pretty happy to go along, and willing to make a few allowances and give it the benefit of the doubt. With the prequels, we were bored and full of doubt, for good reasons.</p>
<p>There are a few flaws that I think are worth noting. The movie is just slightly too long. The reawakening of Artoo-Detoo, just <i>after</i> the destruction of the big bad Starkiller base, allowing the plot to continue with a literal deus ex machina -- is just slightly too silly.</p>
<p>What is up with Kylo Ren's helmet, and Captain Phasma's helmet? One of the notable things about the Empire was the extreme precision and cleanliness of the costumes, including the stormtrooper helmets and Darth Vader's helmet. But in the new movie, Ren's helmet is dinged and dented, with chipped paint, and Phasma's helmet is covered in fingerprints. It's not accidental; even the action figures of Kylo Ren have molded-in dents, and there is no way that someone simply forgot to polish Phasma's helmet; such an error would certainly be caught. They were made to look that way deliberately, in stark contrast to the other uniforms and suits of armor. Why is that?</p>
<p>There are some scenes with the Resistance, preparing X-wing fighters, that look like they were literally shot on the site of a freeway overpass; that reminded me of the way J. J. Abrams decided it was a good idea to use a <a href="http://brookstonbeerbulletin.com/star-treks-engineering-deck-brewery/">brewery</a> for the engine room of the Enterprise -- an incredibly dumb, unconvincing, revisionist look for the Engineering set. The Imperial wreckage on Jakku -- both Imperial Star Destroyers <i>and</i> the walkers from the invasion of Hoth in <i>Empire</i> -- is nostalgic, but bizarre.</p>
<p>There are some coincidences that feel just a little too coincidental. How did Luke's lightsaber wind up in Maz's basement, in an unlocked trunk, in an otherwise empty room?</p>
<p>Starkiller Base makes very little sense; the physics of it just don't work, in any reasonable universe. The Resistance leaders explain that it sucks up "<i>the</i> sun" -- not "the nearest star" -- in a galaxy with billions of suns. Even in a film set on multiple planets, around multiple stars, the producers apparently don't trust the audience to understand how stars and planets work; don't confuse them!</p>
<p>But none of this is really a deal-breaker, because the movie moves so fast, and is so willing to break things. Which brings me to the biggest spoiler of all.</p>
<p>The movie kills Han Solo. Yes, they went there. It was at that moment that the film won me over completely. It was a brave move, and it needed to happen. The screenwriters, including Lawrence Kasdan, who worked on <i>Empire</i>, knew very well that if the audience was to take this movie seriously, it would need to show them that it was serious. That's what the death of Han Solo means. Harrison Ford -- who, by the way, is excellent in this film -- has a terrific death. This is also the reason that, for episode IX to work, the screenwriters will have to kill another major character -- most likely, General Leia -- in the first ten minutes.</p>
<p>Given the impressive start to that trilogy, I believe they will do the right thing -- and it will be glorious. And we'll regard the prequels as an unfortunate, non-canon interlude, a mere glitch, in the continuity of the Star Wars story -- and Lucas will continue his slide into irrelevant <a href="http://www.theguardian.com/film/2016/jan/01/george-lucas-apologises-for-describing-disney-as-white-slavers">lunacy</a>.</p>
<p>And meanwhile, as I approach fifty, I still have to wonder. What was the point of Star Wars? Was it ever anything resembling a genuine artistic statement, or was it always a coldly calculated money-grabbing machine, powered by <a href="https://en.wikipedia.org/wiki/The_Power_of_Myth">myth</a>, in which Lucas figured out how to monetize the scholarship of <a href="https://en.wikipedia.org/wiki/Joseph_Campbell">Joseph Campbell</a>? Was <i>Star Wars</i> ever actually <i>about</i> anything? Was it "real" art, more than a dizzying whirlwind of entertainment, built on genre tropes and with very little in it that was groundbreaking but the improved technology of movie-making?</p>
<p>Was I simply bamboozled, as a child, into imagining that I was seeing a piece of art, something meaningful? If so, does it matter? Is that dizzying whirlwind of entertainment, blended with a calculated human story arc, really enough? Can real art ever be made out of genre fiction? How about Tolkien? What about smashed-together, postmodern genre fiction? Is it just screenwriting that somehow loses the status of "art?" If I enjoy both <i>Moby Dick</i> and <i>Star Wars</i>, is there something wrong with me?</p>
<p>And, if these distinctions don't matter, and the Disney corporation buys George Lucas's property for four <i>billion</i> dollars, knowing they will turn enormous profits on that investment for decades, and makes us a compelling <i>Star Wars</i> entirely cynically, built literally out of the formulaic building blocks of the original, but it works as well, as wonderfully -- distractingly, entertainingly, wonderfully -- as the original, does that matter? And what does it say about art, and about its audience?</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-24120844315072751482015-11-14T20:17:00.004-05:002021-01-27T15:16:48.663-05:00Working with a Thermistor: a C Programming Example<p>Recently I worked on a project that needed to monitor temperature using a thermistor. A thermistor is a resistor that measures temperature: the resistance changes depending on how hot it is. They are used in all kinds of electronic devices to monitor temperature and keep components from overheating.</p>
<p>I searched for a good, simple worked-out example for how to get an accurate reading from the thermistor, but had trouble finding something readable. I am not actually an electrical engineer and have never studied the math behind thermistors formally, but I was able to adapt the formulas from some existing sources, with a little help. I am sharing this example in the hope that it might be useful to someone trying to solve a similar problem. Please note the way I have adapted the <i>general</i> thermistor math to our particular part and circuit; unless you have an identical part and measurement circuit, you will probably not be able to use this example exactly "as-is."</p>
<p>As I understand it, most modern thermistor components are "NTC," which means that they have a "negative temperature coefficient," meaning that their resistance has an inverse relationship to temperature: higher temperature, lower resistance. Thermistors have highly non-linear response, and are usually characterized by the <a href="https://en.wikipedia.org/wiki/Thermistor#Steinhart.E2.80.93Hart_equation">Steinhart-Hart equation</a>. This equation is a general equation that can be parameterized to model the response curve associated with a specific thermistor device. The original form of the equation takes three coefficients, A, B, and C, and describes the relationship between thermistor resistance and temperature in degrees Kelvin (K). It turns out that the three-coefficient form is overkill for a lot of parts and their response curve can be characterized accurately with a single parameter, using a simplified version of the equation. This single parameter is called "beta" and so the equation can be called the <a href="https://en.wikipedia.org/wiki/Thermistor#B_or_.CE.B2_parameter_equation">Beta Parameter Equation</a>.</p>
<p>Reading a thermistor is complicated by the fact that in a typical application we are first using resistance as a measurement of, or proxy for, temperature; that's the basic thing a thermistor does. But in a circuit we don't read resistance directly; instead, we would typically read voltage as a measure of, or proxy for, resistance. To read the resistance from a thermistor we treat it like we would treat a variable resistor, aka potentiometer. We use a <a href="https://en.wikipedia.org/wiki/Voltage_divider">voltage divider</a>. This consists of two resistors in series. In our case we place the thermistor after a fixed resistor, and tap the voltage in between. This goes to an ADC -- an analog-to-digital converter. I'm going to assume that you already have a reasonably accurate ADC and working code to take a reading from it.</p>
<p>So now I'm going to describe how I took the general thermistor math and adapted it for a specific part and circuit. Our specific thermistor is a Murata NCP18XH103F03RB. So you can Google the vendor and part number and find a datasheet. You need to find out a few things from the datasheet, specifically the nominal resistance at the reference temperature, which is usually 25 degrees Celsius, or 298.15K (if it is not, note the actual reference temperature). Also, the datasheet should specify the beta value for your part; in our case, it is 3380.</p>
<p>The beta parameter equation, solved for resistance, reads:</p>
<pre>Rt = R0 * e^( -B * ( 1 / T0 - 1 / T ) )</pre>
<p>Where Rt is the thermistor resistance at temperature T, R0 is the nominal resistance at the reference temperature, e is the mathematical constant e, B is beta, T0 is the reference temperature in K, and T is the measured temperature in degrees Kelvin. We want temperature given resistance, so we can solve it for temperature, like so:</p>
<pre>T = B / ln( R / ( R0 * e^( -B / T0 ) ) )</pre>
<p>Plugging in our R0 = 10,000 ohms, B = 3380, and T0 = 298.15 K we get:</p>
<pre>Rt = 10000 * e^( -3380 * ( 1 / 298.15 - 1 / T ) )</pre>
<p>or</p>
<pre>T = 3380 / ln( R / ( 10000 * e^( -3380 / 298.15 ) ) )</pre>
<p>Now, we need to have something to plug in for R, given the fact that we're reading a voltage from a voltage divider. In our case, the fixed resistor in our voltage divider has the same resistance value in ohms as the nominal resistance for our thermistor at 25 C, 10 kohms (10,000 ohms). Our voltage going into the voltage divider is 2.5V. The standard formula for a voltage divider like this, arranged with the fixed resistor first in the series, before the thermistor, is:</p>
<pre>V = 2.5 * ( R / ( 10000 + R ) )</pre>
<p>If your thermistor comes before the fixed resistor, you will want to swap the two resistor values (see the Wikipedia article on voltage dividers I mentioned above). To get resistance given voltage, we can solve the above for R:</p>
<pre>R = 20000 * v / ( 5 - 2 * v )</pre>
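<p>As a C sketch, inverting the divider looks like this (again, the function name is mine, for illustration only; the constants assume the 2.5 V supply and 10 kohm fixed resistor described above):</p>

```c
#include <math.h>

/* Invert the voltage divider: given the tapped voltage v (in volts)
 * from a 2.5 V supply across a 10 kohm fixed resistor in series with
 * the thermistor, return the thermistor resistance in ohms.
 * Only meaningful for 0 < v < 2.5. */
float divider_voltage_to_resistance(float v)
{
    /* R = 20000 * v / ( 5 - 2 * v ) */
    return (20000.0F * v) / (5.0F - 2.0F * v);
}
```

<p>At v = 1.25 (half the supply), the two resistances must be equal, so this should return exactly 10,000 ohms.</p>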
<p>Now we've got a formula that we can use to convert a voltage reading to a thermistor resistance reading R. We can actually plug the right hand side of that right into our beta parameter equation from above, replacing R:</p>
<pre>T = 3380 / ln( ( 20000 * v / ( 5 - 2 * v ) ) / ( 10000 * e^( -3380 / 298.15 ) ) )</pre>
<p>That looks kind of monstrous; it really seems like this ought to be simpler than that. But Wolfram Alpha could simplify it when my own algebra skills gave out. You can just go to the <a href="http://www.wolframalpha.com">Wolfram Alpha</a> site and paste in that equation, being careful to get the parentheses in the right place. You will want to change the T to a t so that Wolfram Alpha interprets it as a variable, rather than Tesla units. Here's the <a href="http://www.wolframalpha.com/input/?i=t+%3D+3380+%2F+ln%28+%28+20000+*+v+%2F+%28+5+-+2+*+v+%29+%29+%2F+%28+10000+*+e%5E%28+-3380+%2F+298.15+%29+%29+%29">result</a>. Note that Wolfram Alpha has provided a nicely simplified version of the equation, perfect for our needs:</p>
<pre>t = 3380 / ln( 167665 * v / ( 5 - 2 * v ) )</pre>
<p>That equation describes temperature, in K, as a function of the measured voltage from our specific thermistor and voltage divider circuit. Again, keep in mind that unless you have an identical part and circuit, you will not be able to use this formula as-is. Testing against a thermocouple and hand-held infrared thermometer suggests, so far, that our temperature readings are accurate to within a degree C. We have not tested it with extremely high or low temperatures yet, but I expect it to be reasonably accurate; for this application, which involves setting fan speed and determining if we need to shut down components, we don't need a high degree of accuracy.</p>
<p>Finally, remember that the results of this formula are in degrees Kelvin. A C programming language expression for converting degrees Kelvin to degrees Celsius is simply:</p>
<pre>k - 273.15F</pre>
<p>where k is a floating-point value representing degrees Kelvin. Similarly, you can convert to degrees Fahrenheit like so:</p>
<pre>k * 9.0F / 5.0F - 459.67F</pre>
<p>and the C expression to implement our voltage-to-temperature function is:</p>
<pre>3380.0F / log( ( 167665.0F * v ) / ( 5.0F - ( 2.0F * v ) ) )</pre>
<p>where v is a floating-point value representing the voltage from our voltage divider, and <i>log</i> is the C programming language's natural logarithm function, part of the C standard library of math functions.</p>
<p>Please take care when using expressions like this. Note that it contains floating-point division operations and a logarithm. You must check to make sure that the values you are dividing by are non-zero, and that the argument to the logarithm is positive! If not, the result will be a floating-point infinity or "not-a-number" value (NaN). Depending on your platform, this may happen silently, without any sort of runtime error, and the result will then "poison" any downstream calculations carried out with this value.</p>
<p>I hope this has been helpful. Please leave a comment (note: comments are moderated, so you will not see them appear immediately) if you were able to adapt this approach to your project! If you have a question, you can leave a comment too, although keep in mind that I'm not actually an electrical engineer and so would be better at answering programming questions than circuit design questions. Happy measuring!</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-69571525126148031472015-02-05T05:11:00.003-05:002015-02-05T05:18:24.143-05:00A Deep Dive: the Velocity Manufacturing SimulationIn 1989 I graduated from the College of Wooster and then spent a year as an intern with Academic Computing Services there, writing newsletters and little software tools. In the summer of 1990 I moved to Ann Arbor, without a clear idea what I was going to do next.<br />
<br />
I worked for a short while with the Department of Anthropology, but by the end of 1990, I had found a job with the Office of Instructional Technology.<br />
<br />
OIT was sort of the University's answer to the MIT Media Lab. It was an organization where instructional designers, programmers, and faculty members could work together on projects to bring technology into classrooms. It was a pretty remarkable workplace, and although it is long gone, I am truly grateful for the varied experiences I had there. It was the early days of computer multimedia, a sort of wild west of platforms and tools, and I learned a lot.<br />
<br />
In January of 1993 my girlfriend and her parents visited my two workplaces, OIT headquarters and the Instructional Technology Lab, a site in the Chemistry building. I handed my girlfriend a video camera and proceeded to give a very boring little talk to her, and her extremely patient parents. Wow, I was a geek. I'd like to think my social skills and ability to make eye contact are a lot better now, but I probably haven't changed as much as I imagine that I have. I'm an extraverted geek now: when I am having a conversation with you, I can stare at <i>your</i> shoes.<br />
<br />
I have carried the original analog Hi-8 videocassette around through many moves, and life changes, and only today figured out a good way to get it into my computer -- after giving the camcorder heads a very thorough cleaning. I thought the tape was pretty much a lost cause, and was going to try working with my last-ditch backup, a dub to VHS tape, but I'm pleased to learn that the video is still playable, and pleased that I could finally get this made, such as it is.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/1MxN364YayU/0.jpg" frameborder="0" height="266" src="http://www.youtube.com/embed/1MxN364YayU?feature=player_embedded" width="320"></iframe></div>
<br />
<br />
This project, the Velocity Manufacturing Simulation, was written in Visual BASIC, long before it became VB.NET. I remember that it involved a fair amount of code, although I don't have the source to look at. I remember painstakingly writing code for GUI elements like the animated disclosure triangles. There was some kind of custom controls library we bought separately; the details escape me. There was some kind of ODBC (maybe?) database plug-in that I can barely recall; I think Pete did most of the work on that part. Pete wrote parts of it, and I wrote parts of it. Now it seems almost laughably primitive, but you'll just have to take my word for it that back in the day it seemed pretty cool. It won an award. As far as I know, this is the only video footage of the project.<br />
<br />
The code is 147 years old in Internet years. It was almost half my lifetime ago. But at the same time it seems like I just left that office, and somehow if I could figure out where it was, I could still go back and find everyone there in the conference room having lunch, and after lunch settle back into my old office with the vintage, antique computers.<br />
<br />
This was only one of several projects I worked on while I worked at OIT. I have some other bits of video for a few of them, but not all. I will get clips up for at least one more. I wish there were more tape, and better tape, even if the only one nostalgic about these projects is me.<br />
<br />
Perhaps "enjoy" is the wrong word, but take a moment to remember what instructional multimedia was like, a few months before a group called NCSA released a program called Mosaic and the world started to hear about this exciting new thing called the World Wide Web... but grandpa's tired, kids, and that's a story for a different day.Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-81628492218732191872013-11-13T18:40:00.001-05:002013-11-14T09:08:19.002-05:00Apple Breaks Apache Configurations for Gitit (Again)<p>I'm not quite sure why I put myself through this, but I upgraded my Mac Pro to Mavericks. This broke my local Gitit Wiki. The symptom was that Apache was unable to start, although nothing would be written in the error logs. To determine what was wrong I used <b>sudo apachectl -t</b>. The installer did preserve my http.conf, but wiped out the library <b>mod_proxy_html.so</b> that I had installed in <b>/user/libexec/apache2</b>. See this old entry that I wrote back when I fixed it for Mountain Lion <a href="http://geeklikemetoo.blogspot.com/2012/07/apple-breaks-my-gitit-wiki-under.html">here</a>.</p>
<p>I installed XCode 5 and I thought I was set, but there is more breakage. You might need to run <b>xcode-select --install</b> to get headers in <b>/usr/include</b>. The makefile <b>/usr/share/httpd/build/config_vars.mk</b> is still broken in Mavericks, so commands like <b>sudo apxs -ci -I /usr/include/libxml2 mod_xml2enc.c</b> won't work.</p>
<p>To make a long story short, I got the latest (development) version of the mod_proxy_html source, and these commands worked for me:</p>
<p><pre>sudo /usr/share/apr-1/build-1/libtool --silent --mode=compile --tag=CC /usr/bin/cc -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK -I/usr/local/include -I/usr/include/apache2 -I/usr/include/apr-1 -I/usr/include/libxml2 -I. -c -o mod_xml2enc.lo mod_xml2enc.c && sudo touch mod_xml2enc.slo</pre></p>
<p>and</p>
<p><pre>sudo /usr/share/apr-1/build-1/libtool --silent --mode=compile --tag=CC /usr/bin/cc -DDARWIN -DSIGPROCMASK_SETS_THREAD_MASK -I/usr/local/include -I/usr/include/apache2 -I/usr/include/apr-1 -I/usr/include/libxml2 -I. -c -o mod_proxy_html.lo mod_proxy_html.c && sudo touch mod_proxy_html.slo</pre></p>
<p>Previously, this gave me <b>.so</b> files in the generated <b>.libs</b> directory, but now I just have .o files and I'm not sure that's what I want.</p>
Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com3tag:blogger.com,1999:blog-21054185.post-38150011680599025812013-08-11T15:13:00.000-04:002013-08-11T15:18:12.079-04:00More Crappy Print-on-Demand Books -- for Shame, Addison-Wesley "Professional"<p>So, a while back I wrote about some print-on-demand editions that didn't live up to my expectations, particularly in the area of print quality -- <a href="http://geeklikemetoo.blogspot.com/2013/02/tor-print-on-demand-editions-not.html">these</a> Tor print-on-demand editions.</p>
<p>Now, I've come across one that is even worse. A few days ago I ordered a book from Amazon called <i>Imperfect C++</i> by Matthew Wilson -- it's useful, thought-provoking material. Like the famous UNIX-Hater's Book, it's written for people with a love-hate relationship with the language -- that is, those who have to use it, and who desperately want to get the best possible outcomes from using it, writing code that is as solid and portable as possible, and working around the language's many weaknesses. (People who haven't used other languages may not even be aware that something better is possible, and may dismiss complaints about the language as sour grapes; I'm not really talking to those people).</p>
<p>The universe sometimes insists on irony. My first copy of <i>Imperfect C++</i> arrived very poorly glued; the pages began falling out as soon as I opened the cover and began to read. And I am not hard on books -- I take excellent care of them.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-yqk0OqRHRPQ/UgfhBn1BEtI/AAAAAAAADME/KaWrDDjlsTQ/s1600/imperfect_cpp_copy_1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-yqk0OqRHRPQ/UgfhBn1BEtI/AAAAAAAADME/KaWrDDjlsTQ/s400/imperfect_cpp_copy_1.jpg" /></a></div>
<p>So I got online and arranged to return this copy to Amazon. They cross-shipped me a replacement. The replacement is even worse:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-9uXDy00Zwwc/UgfhHiyZitI/AAAAAAAADMU/zAZozoM5T7A/s1600/imperfect_cpp_second_copy.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-9uXDy00Zwwc/UgfhHiyZitI/AAAAAAAADMU/zAZozoM5T7A/s400/imperfect_cpp_second_copy.jpg" /></a></div>
<p>Not only are the pages falling out, because they were not properly glued, but the back of the book had a big crease:</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-AMEtyfKvWIY/UgfhC7edUZI/AAAAAAAADMM/Z-zw7Ho0z0U/s1600/imperfect_cpp_copy_2_back.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-AMEtyfKvWIY/UgfhC7edUZI/AAAAAAAADMM/Z-zw7Ho0z0U/s400/imperfect_cpp_copy_2_back.jpg" /></a></div>
<p>So I guess I'll have to return both.</p>
<div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-oOFmDBeiNd8/UgfhKLN7TII/AAAAAAAADMc/dQmERpLF0_g/s1600/imperfect_cpp_two_copies.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-oOFmDBeiNd8/UgfhKLN7TII/AAAAAAAADMc/dQmERpLF0_g/s400/imperfect_cpp_two_copies.jpg" /></a></div>
<p>I'll look into finding an older used copy that wasn't print-on-demand. But then of course the author won't get any money.</p>
<p>Amazon, and Addison-Wesley, this is shameful. This book costs $50, even with an Amazon discount. I will be sending a note to the author. I'm not sure there is much he can do, but readers should not tolerate garbage like this. Amazon, and Addison-Wesley, fix this! As Amazon approaches total market dominance, I'm reminded of the old Saturday Night Live parody of Bell Telephone: "We don't care. We don't have to. We're the Book Company."</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-40862029197387820872013-08-01T13:08:00.001-04:002013-08-11T22:28:48.976-04:00Arduino, Day 1<p>A friend of mine sent me a <a href="https://www.sparkfun.com/products/11575">RedBoard</a> and asked me to collaborate with him on a development idea. So I'm playing with an Arduino-compatible device for the first time. I've been aware of them, but just never got one, in part because after writing embedded code all day, what I've wanted to do with my time off is not necessarily write more embedded code.</p>
<p>I downloaded the Arduino IDE and checked that out a bit. There are some things about the way it's presented that drive me a little batty. The language is C++, but Arduino calls it the "Arduino Programming Language" -- it even has its own language reference page. Down at the bottom the fine print says "The Arduino language is based on C/C++."</p>
<p>That repels me. First, it seems to give the Arduino team credit for creating something that they really haven't. They deserve plenty of credit -- not least for building a very useful library -- but not for inventing a programming language. Second, it fails to give credit (and blame) for the language to the large number of people who actually designed and implemented C, C++, and the GCC cross-compiler running behind the scenes, with its reduced standard libraries and all. And third, it obfuscates what programmers are learning -- especially the distinction between a <i>language</i> and a <i>library</i>. That might keep things simpler for beginners but this is supposed to be a teaching tool, isn't it? I don't think it's a good idea to obfuscate the difference between the core language (for example, bitwise and arithmetic operators), macros (like <b>min</b>), and functions in the standard Arduino library. For one thing, errors in using each of these will result in profoundly different kinds of diagnostic messages or other failure modes. It also obfuscates something important -- which C++ is this? Because C++ has many variations now. Can I use enum classes or other C++11 features? I don't know, and because of the facade that Arduino is a distinct language, it is harder to find out. They even have the gall to list <b>true</b> and <b>false</b> as constants. If there's one thing C and C++ programmers know, and beginners need to learn quickly, it's that logical truth in C and C++ is messy. I would hate to have to explain to a beginner why testing a masked bit that is not equal to one against <b>true</b> does not give the expected result.</p>
<p>Anyway, all that aside, this is C++ where the IDE does a few hidden things for you when you compile your code. It inserts a standard header, Arduino.h. It links you to a standard <b>main()</b>. I guess that's all helpful. But finally, it generates prototypes for your functions. That implies a parsing stage, via a separate tool that is not a C++ compiler.</p>
<p>On my Mac Pro running Mountain Lion, the board was not recognized as a serial device at all, so I had to give up using my Mac, at least until I can resolve that. I switched over to Ubuntu 12.04 on a ThinkPad laptop. The IDE works flawlessly. I tried to follow some directions to see where the code was actually built by engaging a verbose mode for compilation and uploading, but I couldn't get that working. So I ditched the IDE.</p>
<p>This was fairly easy, with the caveat that there are a bunch of outdated tools out there. I went down some dead ends and rabbit holes, but the procedure is really not hard. I used <b>sudo apt-get install</b> to install <b>arduino-core</b> and <b>arduino-mk</b>.</p>
<p>There is now a common <b>Arduino.mk</b> makefile in my <b>/usr/share/arduino</b> directory and I can make project folders with makefiles that refer to it. To make this work I had to add a new export to my <b>.bashrc</b> file, <b>export ARDUINO_DIR=/usr/share/arduino</b> (your mileage may vary depending on how your Linux version works, but that's where I define additional environment variables).</p>
<p>The Makefile in my project directory has the following in it:</p>
<pre>BOARD_TAG = uno
ARDUINO_PORT = /dev/serial/by-id/usb-*
include /usr/share/arduino/Arduino.mk</pre>
<p>And nothing else! Everything else is inherited from the common Arduino.mk. I can throw <b>.cpp</b> and <b>.h</b> files in there and <b>make</b> builds them and <b>make upload</b> uploads them.</p>
<p>If you have trouble with the upload, you might take a look at your devices. A little experimentation (listing the contents of <b>/dev</b> before and after unplugging the board) reveals that the RedBoard is showing up on my system as a device under <b>/dev/serial</b> -- in my case, <b>/dev/serial/by-id/usb-FTDI_FT232R_USB_UART_A601EGHT-if00-port0</b> and <b>/dev/serial/by-path/pci-0000:00:1d.0-usb-0:2:1.0-port0</b> (your values will no doubt vary). That's why my <b>Makefile</b> reads <b>ARDUINO_PORT = /dev/serial/by-id/usb-*</b> -- so it will catch anything that shows up in there with the <b>usb-</b> prefix. If your device is showing up elsewhere, or you have more than one device, you might need to tweak this to properly identify your board.</p>
<p>When you look at the basic blink demo program in the Arduino IDE, you see this, the contents of an <b>.ino</b> file (I have removed some comments):</p>
<pre>int led = 13;

void setup() {
  // initialize the digital pin as an output.
  pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
  delay(1000);               // wait for a second
  digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
  delay(1000);               // wait for a second
}</pre>
<p>The Makefile knows how to build an <b>.ino</b> file and inserts the necessary header, implementation of <b>main</b>, and generates any necessary prototypes. But if you want to build this code with <b>make</b> as a <b>.cpp</b> file, it needs to look like this:</p>
<pre>#include <Arduino.h>

int led = 13;

void setup() {
  // initialize the digital pin as an output.
  pinMode(led, OUTPUT);
}

// the loop routine runs over and over again forever:
void loop() {
  digitalWrite(led, HIGH);   // turn the LED on (HIGH is the voltage level)
  delay(1000);               // wait for a second
  digitalWrite(led, LOW);    // turn the LED off by making the voltage LOW
  delay(1000);               // wait for a second
}

int main(void)
{
  init();

#if defined(USBCON)
  USBDevice.attach();
#endif

  setup();

  for (;;) {
    loop();
    if (serialEventRun) serialEventRun();
  }

  return 0;
}</pre>
<p>And there it is -- C++, <b>make</b>, and no IDE. Relaxen and watchen <a href="http://en.wikipedia.org/wiki/Blinkenlights">Das blinkenlights</a>!</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-92229888720380864112013-07-30T19:27:00.000-04:002013-08-20T14:44:35.667-04:00Lexx is Wretched<p>I have a fondness for science fiction series that are imaginative but not, as a whole, successful. <i>Farscape</i>, I'm talking about you. Even, occasionally, those that start out promising, but which turn into complete failures -- failure can occasionally be interesting. At least, it serves as an object lesson for how a story line can go so very far wrong. <i>Andromeda</i>, I've got your number. I can deal with very dated CGI -- <i>Babylon Five</i> is still generally good and often great. So I happened to come across discounted boxed sets of <i>Lexx</i>, the whole series, at my local Target store. They were dirt cheap. "How bad could it be?" I thought. Well, now I know. At least, I know part of the story.</p>
<p>First off, <i>Lexx</i> is not something I can show my kids -- pretty much at all. Season 1 has a surprising amount of very fake gore in it -- brains and guts flying everywhere. That didn't really bother them -- I think they got that the brains were made of gelatin -- but it was getting to me. Watching characters carved up by rotating blades, repeatedly; watching characters getting their brains removed -- that got old. Body horror, body transformation -- pretty standard stuff for B grade science fiction, or anything that partakes of the tropes of such, but not actually kid-friendly. So we didn't continue showing the kids.</p>
<p>Still, I thought it might make more sense to watch them in order, so I watched the second two-hour movie (1:38 without commercials). The second one has full frontal nudity, which startled me a bit. I'm not really opposed to looking at a nubile young woman, <i>per se</i>. There is some imaginative world-building and character creation here, but ultimately it's just incredibly boring. It's like the producers shot the material, not having any idea how long the finished product would be; they shot enough scenes to actually power an hour show (forty-plus minutes without commercials), but also shot a bunch of extended padding sequences, "just in case." And so after a repeated intro that lasts just under four minutes, we get a two-hour show with endless cuts to spinning blades slowly approaching female groins, huge needles slowly approaching male groins, countdown timers counting down, getting stopped, getting started, getting stopped... endless fight scenes, endless scenes of the robot head blathering his love poetry, a ridiculous new character eating fistfuls of brains... et cetera, et cetera, et cetera.</p>
<p>Every time something happens, I'd get my hopes up, thinking that maybe the writing has actually improved, but then it's time to slow down the show again, because we've still got an extra hour and twenty minutes to pad. And it's all distressingly sexist and grotesquely homophobic. Again, I'd be lying if I said that I didn't like to look at Eva Habermann in a miniskirt, but given that the actress is <i>actually</i> young enough to be my daughter, and especially given that she has so little interesting to <i>do</i>, and there's just not much <i>character</i> in her character -- it's -- well, "gratuitous" doesn't even begin to cover it. She's young, but Brian Downey was old enough to know better. And let's just say I'm a little disgusted with the choices the show's producers made. The guest stars in Season 1 are like a who-used-to-be-who of B actors -- Tim Curry, Rutger Hauer, Malcolm McDowell. There's material here for a great cult show -- but these episodes are mostly just tedious. They're actually not good enough to be cult classics.</p>
<p>The season consists of four two-hour movies. After watching the first movie, I didn't quite realize all four season one movies were on one disc, so when I tried to watch some more, I put in the first disc of season two by mistake. I watched the first few episodes of season two -- these are shorter. I didn't notice any actual continuity issues. In other words, nothing significant changes from the pilot movie to the start of season two. There are some imaginative satirical elements. Season 2, episode 3 introduces a planet called "Potatohoe" which is a pretty funny satire of the American "right stuff" tropes. But it's too little, and it amounts to too little, amidst the tedious general adolescent sex romp. Then we lose Eva Habermann, who was 90% of the reason I even watched the show this far. I'm honestly not sure if I can watch any more.</p>
<p>It doesn't help that several of the discs skip a lot. It might have something to do with the scratches that were on the discs when I took them out of the packaging, which come from the fact that the discs are all stuck together on a single spindle in the plastic box. And the discs themselves are all unmarked, identifiable only by an ID number, not any kind of label indicating which part of which season they hold -- so good luck pulling out the one you want.</p>
<p>I'm told the later seasons have some very imaginative story lines. People say good things about the third season. It seems like the universe has a lot of potential. Is it worth continuing, or am I going to be in old <i>Battlestar Galactica</i>'s second season territory?</p>
<p>UPDATE: I have continued skimming the show. The scripts seem to get somewhat more interesting around season 2, episode 5, called "Lafftrak." It finally seems to take its darkness seriously enough to do something interesting with it, and not just devolve to pornographic settings. The pacing is still weak, but the shows start to feel as if they have a little bit of forward momentum. Of course, then in the next episode, we're back to Star Whores and torture pr0n...</p>
Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com4tag:blogger.com,1999:blog-21054185.post-20922073536814576042013-07-24T14:46:00.000-04:002013-07-24T14:46:33.360-04:00The Situation (Day 135)<p>So, it's day 135. This is either the last covered week (week 20) of unemployment benefits, or I have three more; I'm not quite sure. Without a new source of income, we will run out of money to cover mortgage payments either at the end of September or the end of October. We have burned through the money I withdrew from my 401K in March when I was laid off. I've been selling some possessions, guitars and music gear, but this is demoralizing, and not sustainable. We don't have much more that is worth selling.</p>
<p>I was fortunate to have a 401K to cash out, and to get the food and unemployment benefits I've gotten -- so far I have been able to pay every bill on time and my credit rating is, so far, completely unscathed. But winter is coming. And another son is coming -- Benjamin Merry Potts, most likely around the middle of October.</p>
<p>Emotionally, the situation is very confusing. On the one hand, I have several very promising job prospects, and I'm getting second phone interviews. But these are primarily jobs that would require relocating, plus a small number that might allow me to work from home. This includes positions in Manhattan and Maine. We're coming to grips with the fact that we will most likely have to leave Saginaw. It's a well-worn path out of Saginaw. We were hoping to stick with the road less traveled, but we can't fight economic reality single-handed. And we don't really have any interest in relocating <i>within</i> Michigan, <i>again</i>. If we're going to have to move, let's move somewhere where we won't have to move again -- someplace where, if I lose one job, there's a good chance I can quickly find another.</p>
<p>So, we are willing to relocate, for the right job in the right place. The right place would be the New England area -- Grace is fed up here, and I am too. Maine, Vermont, New Hampshire, Massachusetts, Connecticut, New York, or eastern Pennsylvania are all appealing, but relocating would not be a quick and easy process. It would probably involve a long separation from my family. I don't relish that idea, especially with my wife caring for a new baby. That might be what it takes, though; I'll do it for the right job and the right salary and the right place. In any case, we can't move with either a very pregnant woman or a newborn, and selling, or even renting out, a house would not be quick or easy either. A benefit to a permanent job in Manhattan is that it would pay a wage that is scaled for the cost of living there. It might be perfectly doable for me to find as cheap a living arrangement there as I can, work there, and send money home. A Manhattan salary would go a long way towards maintaining a household in Michigan, and helping us figure out how to relocate, and I'd probably be able to fly home fairly frequently.</p>
<p>I would consider a short-term remote contract job where I wasn't an employee, and didn't get benefits, and earned just an hourly wage. Let's say it was a four-hour drive away. I'd consider living away from home during the work week, staying in an extended-stay motel, and driving home on weekends. But it would have to pay well enough to cover that commute and the hotel, and still leave enough to send home to pay the mortgage and bills. A per diem would help, but the contract work like this I've seen won't cover a per diem. We'd need to maintain two cars instead of one. Grace would need to hire some people for housekeeping and child care help, since I wouldn't be there to spend the time I normally spend doing basic household chores and helping to take care of the kids.</p>
<p>Would I consider a contract job like that farther away -- for example, an hourly job in California? That's tougher. I think I could tolerate seeing my wife and kids only on weekends, if I knew that situation would not continue indefinitely. But if I had to fly out, that probably wouldn't be possible. California has very little in the way of public transportation. Would I have to lease a car out there, so I could drive to a job? Take cabs? It might make more sense to buy a used car, once out there. In any case, it would cost. Paying for the flights, the hotel, and the car, with no per diem, it's hard to imagine that I'd be able to fly home even once a month. Would I do a job like that if I could only manage to see my family, say, quarterly? Let's just say that would be a hardship. I would consider an arrangement like this if it paid enough. But the recruiters who are talking to me about these jobs are not offering competitive market rates. It doesn't seem like the numbers could work out -- I can't take a job that won't actually pay all our expenses.</p>
<p>The prospect of employment locally or within an hour commute continues to look very poor. I've applied for a number of much lower-paying IT or programming jobs in the region, and been consistently rejected. These jobs wouldn't pay enough to afford a long commute or maintain any financial security at all. In fact, I think we'd still be eligible for food stamps (SNAP) and my wife and kids would probably still be eligible for Medicaid. Their only saving grace is that they would pay the mortgage. Some of them might provide health insurance, at least for me. But I've seen nothing but a string of form rejections for these positions.</p>
<p>Grace and I don't get much quiet time -- we haven't had an actual date night, or an evening without the kids, since March. The closest we come is getting a sitter to watch the kids for a couple of hours while we run some errands. That's what we did last Sunday. I made a recording and turned it into a podcast. You can <a href="http://generalpurposepodcast.blogspot.com/2013/07/the-grace-and-paul-pottscast.html">listen</a> if you are interested.</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com2tag:blogger.com,1999:blog-21054185.post-49693040331516328552013-07-24T13:09:00.001-04:002013-07-24T13:18:08.383-04:00Building a Podcast Feed File, for Beginners<p>I had a question about how to set up a podcast. I wrote this answer and thought while I was at it, I might as well polish up the answer just a bit and post it, in case it would be helpful to anyone else.</p>
<p><b>I'm starting a podcast and I need help creating an RSS feed. You're the only person I could think of that might know how to create such a thing. Is there any way you could help me?</b></p>
<p>OK, I am not an expert on podcasts in general because I've only ever created my own. I set mine up by hand. I'll tell you how I do that, and then you can try it that way if you want. You might prefer to use a web site that does the technical parts for you.</p>
<p>A podcast just consists of audio files that can be downloaded, and the feed file. I write my feed files by hand. I just have a hosting site at DreamHost that gives me FTP access, and I upload audio files to a directory that is under the root of one of my hosted web site directories. For example: <a href="http://thepottshouse.org/pottscasts/gpp/">http://thepottshouse.org/pottscasts/gpp/</a></p>
<p>I write the feed file with a text editor. I use <a href="http://www.barebones.com/products/bbedit/index.html">BBEdit</a>, which is a fantastic text editor for the Macintosh that I've used for over 20 years, but any text editor will do. For the General Purpose Podcast, this is the feed file: <a href="http://thepottshouse.org/pottscasts/gpp/index.xml">http://thepottshouse.org/pottscasts/gpp/index.xml</a></p>
<p>The feed file contains information about the podcast feed as a whole, and then a series of entries, one for each episode (in my case, each audio file, although they don't strictly have to be audio files; you can use video files). When I add an audio file, I just add a new entry that describes the new audio file.</p>
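<p>The two-level structure described here -- channel-wide information followed by one entry per episode -- boils down to something like the following skeleton. This is a hypothetical sketch in Python, not the author's actual feed or tooling; the tag set is deliberately minimal, and a real podcast feed would add the iTunes-specific tags covered later.</p>

```python
# Minimal sketch of a two-level RSS feed: channel-wide info, then one
# <item> per episode. Illustrative only; titles and URLs are invented.

def build_feed(title, link, description, items):
    """Wrap pre-built <item>...</item> strings in a single channel."""
    body = "\n".join(items)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
<title>{title}</title>
<link>{link}</link>
<description>{description}</description>
{body}
</channel>
</rss>"""
```

<p>Adding a new episode then amounts to appending one more entry to the list of items, which mirrors the by-hand workflow described above.</p>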
<p>This is a slight simplification. I actually use a separate "staging" file for testing before I add entries to the main podcast feed. The staging file contains the last few episodes, and I have a separate subscription in iTunes to the "staging" podcast for testing purposes. When I upload a new episode MP3 file, I test it by adding an entry to the staging index file here: <a href="http://thepottshouse.org/pottscasts/gpp/index_staging.xml">http://thepottshouse.org/pottscasts/gpp/index_staging.xml</a></p>
<p>So I add an entry to test, and then tell iTunes to update the staging podcast. If it works OK and finds a new episode, downloads it, and it comes out to the right length, and the tags look OK, then I add the same entry to the main index file.</p>
<p>I have a blog for the podcast too. That's a separate thing on Blogger, here: <a href="http://generalpurposepodcast.blogspot.com">http://generalpurposepodcast.blogspot.com</a> That just provides a jumping-off point to get to the episodes, and something I can post on Facebook or Twitter. For each episode I just make a new blog post, write a description, and include a link to the particular MP3 file. The blog's sidebar also has links to the feeds and to the iTunes store page for the podcast. I'll get to the iTunes store in a minute.</p>
<p>Oh, writing the entry in the feed file is kind of a pain. You have to specify a date, and it has to be formatted correctly, with the right GMT offset -- which changes with daylight saving time. You have to specify the exact number of bytes in the file and the length in hours, minutes, and seconds. If you get these wrong the file will not be downloaded correctly -- it will be cut off. The URL needs to be URL-escaped; for example, spaces become %20.</p>
<p>If I upload the file to my hosting site first, so that I can see the file in my web browser, and copy the link, it comes out URL-escaped for me, so that part is easy. I paste that link to the file into the feed file entry for the episode. The entry gets a link to the file, and then there is also a UID (a unique ID for the episode). Personally, I use the same thing for both the UID and the link, but they can be different. The UID is how iTunes (or some other podcast reader) decides, when it reads your feed file, whether it has downloaded that file already, or whether it needs to download it again. So it's important to come up with a scheme for UIDs and then never change them, or anyone who subscribes to your podcast will probably either see errors or get duplicated files. In other words, even if I moved the podcast files to a different server, and the link needed to be changed, I would not change the UIDs of any of the existing entries.</p>
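<p>Putting those pieces together -- the correctly formatted date with GMT offset, the exact byte count, the escaped URL, and a UID that doubles as the link -- the bookkeeping for one entry can be sketched in a few lines of Python. This is a hypothetical helper, not the author's actual workflow; the names and values in it are invented, and the standard library does the fiddly parts.</p>

```python
# Hypothetical helper computing the fiddly fields for one feed entry.
import os
from email.utils import formatdate
from urllib.parse import quote

def feed_item(title, base_url, filename, duration):
    size = os.path.getsize(filename)   # exact byte count, or downloads get cut off
    url = base_url + quote(os.path.basename(filename))  # spaces become %20, etc.
    pub_date = formatdate(localtime=True)  # RFC 822 date with the right GMT offset
    return (
        "<item>\n"
        f"<title>{title}</title>\n"
        f"<pubDate>{pub_date}</pubDate>\n"
        # the UID (guid) is the same as the link, and must never change:
        f"<guid>{url}</guid>\n"
        f'<enclosure url="{url}" length="{size}" type="audio/mpeg"/>\n'
        f"<itunes:duration>{duration}</itunes:duration>\n"
        "</item>"
    )
```

<p>The duration still has to come from the audio tool, but the date, byte count, and escaping are exactly the fields that, done by hand, cause truncated downloads when they're wrong.</p>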
<p>Once you have your feed file, you can check it with the feed validator -- and you definitely should do this before giving it out in public or submitting it to the iTunes store. See <a href="http://feedvalidator.org">http://feedvalidator.org</a> I try to remember to check mine every so often just to make sure I don't have an invalid date or something like that. If the feed is not working, this thing might tell you why.</p>
<p>OK, the next thing is iTunes integration. The thing to keep in mind here is that Apple does not host any of your files or your feed. You apply to be in the podcast directory, and then someone approves it, and the system generates a page for you on Apple's site. Once a day or so it reads your feed file and updates that page. The point here is that if someone is having problems with your page on iTunes, it is probably not Apple's fault, it is probably a problem with your feed or your hosted audio files.</p>
<p>If you don't want to do this all manually there are sites that will set up your feed for you automatically, like <a href="http://www.libsyn.com">libsyn.com</a> and <a href="http://www.podbean.com">podbean.com</a>. I am not sure which one is best and I have not used them.</p>
<p>This is Apple's guide that includes information on how to tag your files in the feed -- you could start out with mine as an example, but this is the de facto standard for writing a podcast feed that will work with iTunes and the iTunes store: <a href="http://www.apple.com/itunes/podcasts/specs.html">http://www.apple.com/itunes/podcasts/specs.html</a></p>
<p>OK, now you know just about everything I know about it. Oh, there is one more thing to talk about. This part is kind of critical.</p>
<p>So you create an audio file -- I make a WAV file and then encode it into an MP3 file either in Logic or in iTunes. My recent spoken word files are encoded at 128 Kbps; if I'm including music I would use a higher bit rate. Some people compress them much smaller, but I am a stickler about audio quality and 128 Kbps is about as much compression as I can tolerate.</p>
<p>You then have to tag it. This actually changes data fields in your MP3 file. The tagging should be consistent. You can see how my files look in iTunes. If the tagging is not consistent then the files will not sort properly -- they won't group into albums or sort by artist and that is a huge pain. When files get scattered all over your iTunes library, it looks very unprofessional and I tend to delete those podcasts. But note that the tags you add are not quite as relevant as they would be if you were releasing an album of MP3 files, and here's why -- podcasts have additional tags that are added by your "podcatcher" -- iTunes, or some other program that downloads the podcast files.</p>
<p>So you tag your MP3 file, and take note of the length (the exact length in bytes and the length in hours, minutes, and seconds), so that you can make a correct entry in your feed file. The MP3 file is the file you upload, but note that this file is not actually a podcast file yet. It doesn't show up in "Podcasts" under iTunes. It becomes a podcast file when iTunes or some other podcatcher <i>downloads</i> it. iTunes reads the metadata from the feed file (<i>metadata</i> is data about a file that is not in the file itself) and it uses parts of that metadata, like the podcast name, to add <i>hidden</i> tags to the MP3 file. Yes, it changes the file -- the MP3 file on your hard drive that is downloaded will not be exactly the same file you put on the server. This is confusing. But it explains why if you download the MP3 file directly and put it in your iTunes library, rather than letting iTunes download it as a podcast episode, it won't "sort" -- that is, it won't show up as an iTunes podcast under the podcast name.</p>
<p>At least, that has been true in the past. I think recent versions of iTunes have finally made it so there is an "advanced" panel that will let you tell iTunes that a file is a podcast file, but sorting it into the proper podcast this way might still be tricky. So the key thing is that you might want to keep <i>both</i> your properly tagged source files, because those are the ones you would upload to your site if, for example, your site lost all its files, or if you were going to relocate your site to a new web server, <i>and also</i> the files after they have been downloaded and tagged as podcasts by iTunes. I keep them separately. If someone is missing an episode I can send them the podcast tagged file and they can add it to their iTunes library and it will sort correctly with the other podcast files.</p>
<p>OK, now you pretty much know everything I know about podcast feeds. I prefer doing it by hand because I'm a control freak -- I like to know exactly what is happening. I like to tag my files exactly the way I want. But if you're not into that -- if you don't know how to upload and download files of various kinds and tag MP3 files, for example -- you probably want to use something like Libsyn. Or maybe you know what to do but just want to save time. I just know that I've sometimes been called on to help people using these services fix their feeds after they are broken, or they need to relocate files, and it isn't pretty, so I'll stick to my hand-rolled feed.</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-22850748126484109972013-07-08T00:36:00.002-04:002013-07-08T01:03:48.533-04:00Building Repast HPC on Mountain Lion<p>For a possible small consulting project, I've built Repast HPC on Mountain Lion and I'm making notes available here, since the build was not simple.</p>
<p>First, I needed the hdf5 library. I used hdf5-1.8.11 from the .tar.gz. This has to be built using ./configure --prefix=/usr/local/ (or somewhere else if you are doing something different to manage user-built programs). I was then able to run sudo make, sudo make check, sudo make install, and sudo make check-install and that all seemed to work fine (although the tests take quite a while, even on my 8-core Mac Pro).</p>
<p>Next, I needed to install netcdf. I went down a versioning rabbit hole for a number of hours with 4.3.0... I was <i>not</i> able to get it to work! Use 4.2.1.1. ./configure --prefix=/usr/local, make, make check, sudo make install.</p>
<p>Next, the netcdf-cxx, the C++ version. I used netcdf-cxx-4.2 -- NOT netcdf-cxx4-4.2 -- with ./configure --prefix=/usr/local/</p>
<p>Similarly, boost 1.54 had all kinds of problems. I had to use boost 1.48. ./bootstrap.sh --prefix=/usr/local and sudo ./b2 ... the build process is extremely time consuming, and I had to manually install both the boost headers and the compiled libraries.</p>
<p>Next, openmpi 1.6.0 -- NOT 1.6.5. ./configure --prefix=/usr/local/ seemed to go OK, although it seems to run recursively on sub-projects, so it takes a long time, and creates hundreds of makefiles. Wow. Then sudo make install... so much stuff. My 8 cores are not really that much help, and don't seem to be busy enough. Maybe an SSD would help keep them stuffed. Where's my 2013 Mac Pro "space heater" edition, with a terabyte SSD? (Maybe when I get some income again...)</p>
<p>Finally, ./configure --prefix=/usr/local/ in repasthpc-1.0.1, and make succeeded -- after about four hours of messing around with broken builds. I had a lot of trouble with build issues for individual components, and then problems with Repast HPC itself despite everything else building successfully, before I finally found this e-mail message chain, which had some details about the API changes between different versions and laid out a workable set of libraries:</p>
<p><a href="http://repast.10935.n7.nabble.com/Installing-RepastHPC-on-Mac-Can-I-Install-Prerequisite-Libraries-with-MacPort-td8293.html">http://repast.10935.n7.nabble.com/Installing-RepastHPC-on-Mac-Can-I-Install-Prerequisite-Libraries-with-MacPort-td8293.html</a></p>
<p>They suggest that these versions work:</p>
<pre>drwxr-xr-x@ 27 markehlen staff 918 Aug 21 19:14 boost_1_48_0
drwxr-xr-x@ 54 markehlen staff 1836 Aug 21 19:19 netcdf-4.2.1.1
drwxr-xr-x@ 26 markehlen staff 884 Aug 21 19:20 netcdf-cxx-4.2
drwxr-xr-x@ 30 markehlen staff 1020 Aug 21 19:04 openmpi-1.6
drwxr-xr-x@ 31 markehlen staff 1054 Aug 21 19:28 repasthpc-1.0.1</pre>
<p>And that combination did seem to work for me. I was able to run the samples (after changing some directory permissions) with:</p>
<pre>mpirun -np 4 ./zombie_model config.props model.props
mpirun -np 6 ./rumor_model config.props model.props</pre>
<p>---</p>
<p>Notes on building Boost 1.54: doing a full build yielded some failures, with those megabyte-long C++ template error messages. I had to build individual libraries. The build process doesn't seem to honor the prefix and won't install libraries anywhere but a stage directory in the source tree. I had to manually copy files from stage/lib into /usr/local/lib and manually copy the boost headers. There is an issue with building mpi, too:</p>
<pre>./bootstrap.sh --prefix=/usr/local/ --with-libraries=mpi --show-libraries
sudo ./b2</pre>
<p>only works properly if I first put a user-config.jam file in my home directory containing "using mpi ;". Then I have to manually copy the boost mpi library.</p>
<p>Notes on building netcdf-cxx4-4.2: I had to use sudo make and sudo make install since it seems to write build products into /usr/local/ even before doing make install (!)</p>
<p><b>How many years of C++ experience do you have & have you worked with C++ during the last 2 years?</b></p>
<p>I've been using both C and C++ since before the C89/C90 standard and well before the C++98 standard. I taught myself C programming in college -- I did not learn it in a class. I initially used C++ when there were various sets of object-oriented extensions to C like THINK C and <a href="http://en.wikipedia.org/wiki/Microsoft-specific_exception_handling_mechanisms">Microsoft's "structured exception handling"</a> for Windows NT 3.5.</p>
<p>It's hard to provide an exact "number of years." At some positions I worked more in plain old C, or Java, or NewtonScript or other languages, but even in those jobs there were often times where I was working with small C and/or C++ projects on the side.</p>
<p>I own copies of the ISO standards for C and C++ (C90, C99, and C++03) and used to study them for my own edification, so that I could write more portable code. I used to subscribe to the C++ Report magazine. I used to write C and C++ interview test questions for screening people at the University of Michigan. I own dozens of books on C++ and have studied them extensively. I was definitely a C++ expert, although I was much more of an expert on C++03 than C++11. I am not so interested in the "cutting edge" of C++ these days (see below for notes about STL and C++11/C++0x). For example, here's a blog post I wrote about the C++ feature "pointers to member functions," in 2006:</p>
<p><a href="http://praisecurseandrecurse.blogspot.com/2006/08/generic-functions-and-pointers-to.html">http://praisecurseandrecurse.blogspot.com/2006/08/generic-functions-and-pointers-to.html</a></p>
<p>I have used the following compilers and frameworks for paid work (off the top of my head, these are the major tools I've used, and I am probably forgetting some):</p>
<ul>
<li>THINK C / Think Class Library</li>
<li>MPW C/C++</li>
<li>Borland C++ with the Object Windows Library and TurboVision for MS-DOS</li>
<li>Microsoft Visual C++ starting with 1.0 / MFC</li>
<li>CodeWarrior / PowerPlant class library and Qt</li>
<li>XCode (aka ProjectBuilder) / CoreAudio</li>
<li>GCC ("g++") / Lectronix AFrame library</li>
<li>TI Code Composer Studio</li>
</ul>
<p>In addition, I have some experience with static checkers (Lint, Understand for C/C++, QAC, etc.; more are mentioned on my resume), and I would say they are a must for large commercial code bases. Also, I have worked with profilers, various run-time debuggers, and tools such as valgrind -- these are incredibly useful in finding bugs, especially in the use of uninitialized memory.</p>
<p>So, how do you put that in an exact number? I'd say I've used C++ daily for perhaps 12 years, but even when I was not using C++ as my primary development language for a given job or set of projects, I used it at least a little bit every year for the last 24 years. So somewhere in between those numbers.</p>
<p><b>In the Last Two Years</b></p>
<p>Yes, the most recent project was a server for Lectronix, something called the PTT Server, that sits on top of the AFrame framework and receives requests via JSON-RPC, and manages the state of all the discrete IO in the system. It is a multi-threaded application using message queues and hierarchical state machines. The server is not very big, maybe 7,500 lines of code, and the top layer of it is actually generated by some internal Python scripts. During this period, I was also maintaining and adding new features to several other servers and drivers as needed.</p>
<p>If the client wants to know whether I am familiar with C++11/C++0x, the answer is "not very much." I have not yet studied the C++11 changes in depth, so I am only slightly familiar with features like enum classes and lambdas. At Lectronix, we chose not to try to adopt new features for an existing multi-million line code base, and we stuck with slightly older, well-tested versions of our compilers. I have definitely used STL, but we do not use it heavily in embedded projects, because of a conservative attitude towards memory use and hidden costs. We also tend to avoid things like dynamic_cast and multiple inheritance in embedded programming, although I have used these features in the past. We tend to deliberately use a conservative subset of C++.</p>
<p>While I consider myself an expert on C++, it is not the be-all, end-all of programming languages. Learning other languages has made me a much better programmer, able to see problems from different perspectives. On several occasions I have prototyped designs in other languages, such as Dylan or Haskell, to refine a <i>design</i>, and then ported the design to C++ to produce the shipping product.</p>
<p>I believe the industry is gradually moving towards functional programming, and languages such as Scala (which runs on the JVM) or Haskell (which can now generate code for ARM on "bare metal"), and embeddable scripting languages on top of C/C++ for configurability (for example, Lua on top of a back end written in C++ is the structure of most high-performance commercial video games). Even sticking with plain C++, there is no denying that Clang/LLVM are very promising developments -- Clang has the best error-checking and static analysis I've seen so far for C++, and for Objective-C this static analysis has enabled a feature called ARC -- automatic reference counting, which is basically garbage collection without a separate background task that can create "pauses."</p>
<p>I have a strong interest in figuring out how to use tools such as these to make a business more competitive -- specifically, reducing line counts and bug counts and improving time to market. If the client is not interested in any of that, I'm probably not actually going to be the best fit for them, since they will not be making maximum use of my skills. I see myself as a full-stack software developer who should be able to choose the best tools for the job, not strictly as a C++ programmer.</p>
<p><b>What recent steps have you taken to improve your skills as a software developer?</b></p>
<p>Recently while still at Lectronix I was asked to re-implement some logic for handling the "PTT" (push to talk) signals for police radios, microphones, and hand controllers. My supervisor wanted me to use a library for handling HSM (hierarchical state machine) designs. I had never worked with hierarchical state machines, just flat state machines, so this was a little bit of a challenge. My first drafts of the state machines were not very good, but I arranged to meet with some other developers on my team who had more experience with HSM designs to improve them. After a couple of revisions I got the state machines up and running; they were simpler than the original design, they passed all my testing and all the bench testing, and we shipped them in a prototype for a police motorcycle product. So I now feel that I understand the basics of using hierarchical state machines as a design tool.</p>
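<p>For anyone who hasn't met the idea: the core of a hierarchical state machine is that a state which doesn't handle an event defers it to its parent state, so shared behavior lives in one superstate instead of being duplicated across every substate. The toy sketch below illustrates only that dispatch rule -- it is not the Lectronix library or the actual PTT design, and every state and event name in it is invented.</p>

```python
# Toy hierarchical state machine: unhandled events bubble up to the parent.
# Generic illustration only; state and event names are invented.

class State:
    def __init__(self, name, parent=None, handlers=None):
        self.name = name
        self.parent = parent
        self.handlers = handlers or {}   # event name -> next state name

class HSM:
    def __init__(self, states, initial):
        self.states = {s.name: s for s in states}
        self.current = self.states[initial]

    def dispatch(self, event):
        state = self.current
        while state is not None:          # walk up the state hierarchy
            if event in state.handlers:
                self.current = self.states[state.handlers[event]]
                return True
            state = state.parent          # defer unhandled events upward
        return False                      # nobody handled it; ignore

# The "radio" superstate handles power_off for both of its substates,
# so neither "idle" nor "transmitting" needs its own power_off handler.
radio = State("radio", handlers={"power_off": "off"})
idle = State("idle", parent=radio, handlers={"ptt_pressed": "transmitting"})
transmitting = State("transmitting", parent=radio, handlers={"ptt_released": "idle"})
off = State("off", handlers={"power_on": "idle"})
```

<p>A flat state machine would need a power-off transition on every single state; the hierarchy is what makes the design simpler.</p>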
<p>While unemployed, and although the job search itself takes up a great deal of time, I have been working on teaching myself Objective-C programming, something I've wanted to learn for a long time, and the basics of Apple's iOS framework for developing iPhone and iPad applications. My goal is to get a simple game up and running and available in the app store as a small demonstration project to show potential employers. Even if the first version is not sophisticated it should prove that I can build on those skills. I am doing this work "in public" -- sharing the code on github and writing about the design experiments and trade-offs on my blog. Here is one of my blog posts about the Objective-C implementation of the game:</p>
<p><a href="http://praisecurseandrecurse.blogspot.com/2013/06/objective-c-day-5.html">http://praisecurseandrecurse.blogspot.com/2013/06/objective-c-day-5.html</a></p>
<p>The latest code is available on GitHub here: <a href="https://github.com/paulrpotts/arctic-slide-ios">https://github.com/paulrpotts/arctic-slide-ios</a></p>
<p>I am also attempting to master some of the slightly more advanced features of the Haskell programming language, and functional programming in general, on the grounds that I believe that properly using functional languages such as F#, Scala, and Haskell can provide a competitive advantage, and give me the chance to bring that advantage to an employer.</p>
<p><b>Describe any experience you have in developing desktop applications.</b></p>
<p>Just to expand on some items from my resume and some that aren't:</p>
<p>In college I worked with a mathematics faculty member to develop an instructional multimedia tool using HyperCard and XCMD plug-ins that I wrote in THINK C for teaching calculus. I developed various other small applications for "classic" MacOS too, including a tool to edit version resources and a startup extension.</p>
<p>At the University of Michigan's Office of Instructional Technology, I built several instructional multimedia applications for students -- based around HyperCard stacks with custom XCMD and XFCN plug-ins written in C, Toolbook programs that integrated content from videodiscs, and a Visual BASIC application that used digital video to teach manufacturing process re-engineering techniques to business school students.</p>
<p>As a side project, I used Borland C++ and the TurboVision for MS-DOS framework, and also Visual BASIC for MS-DOS, to develop a survey application "front end" (to collect data at remote sites) and "back end" (to read the data from discs and aggregate it and display statistics) for the National Science Teachers Association (NSTA).</p>
<p>At Fry Multimedia, I built a prototype, with one other developer, in C++ using the MFC framework, of a Windows application to search a large CD-ROM database of compressed business data called "Calling All Business." This featured a "word wheel" feature that would match entries in the database and display matches while the user typed in search strings.</p>
<p>At the University of Michigan Medical Center I wrote, among other things, a Newton application that administered surveys to people in the ER waiting rooms. This was not desktop so much as "palmtop" but the same emphasis on user-centered design was there. I also either wrote entirely, or collaborated with another developer on, several internal applications, such as a Macintosh application written in C++ to upload data from the Newton devices, an application written using Metrowerks CodeWarrior (in C++ using the PowerPlant framework) to use AppleEvents to control Quark XPress in order to print batches of customized newsletters while providing text-to-speech feedback.</p>
<p>At Aardvark Computer Systems I completely rewrote the GUI application for controlling the company's flagship sound card product, the Aardvark Direct Pro Q10. This featured a mixer interface with knobs and sliders and animated meters to display audio input and output levels on all channels, persistent storage of mixer settings, and was built using C++ and the Qt framework. I also ran the beta-test program for this software.</p>
<p>At Lectronix, my work did not focus on desktop applications but I was able to occasionally contribute code to the Infotainment system GUIs, written in C++ using the Qt framework.</p>
<p><b>Describe any experience you have in developing server-side applications.</b></p>
<p>The bulk of my work at InterConnect was revisions to a server application written in Java that ran on Sun machines, and parsed "page collections" (bundles of scanned page images) along with metadata -- a combination of XML, including Library of Congress subject heading data, and MARC records -- to populate Oracle databases. These were large collections (terabytes, and at the time that was an unusually large amount of data to put into a web application). The data was housed on the client's EMC storage RAID arrays (at the time, very high-end systems). A full run of the program to populate a "page collection" would take several days. I worked with the client's Oracle team to debug issues with their database, particularly stored procedures written in PL/SQL, and with their production team to determine the best strategies for handling data issues (I wrote code to "clean" this data). The client was ProQuest (formerly University Microfilms International and Bell and Howell Information and Learning), and I worked specifically on the back-end for the Gerritsen Women's History collection and the Genealogy and Family History collection. When InterConnect handed over development to ProQuest's internal team, I wrote documentation on the import process and gave a presentation to explain it to their team.</p>
<p>Much of my work at Lectronix was also server-side, in the sense that all the code on products like the Rockwell iForce system was divided into drivers, servers, clients, and GUI code. Servers interact with clients and other servers through a network sockets interface wrapped in Lectronix's proprietary framework. For example, the Audio Zone Manager (AZM) server receives all requests as remote procedure calls and handles multiple clients. For complex tasks like "priority audio" text-to-speech prompts, it sets up a session: a client requests a session, the AZM lowers the level of "foreground" audio such as FM radio, the requesting client is granted a token, and the client must then send "keepalive" messages using that token to keep the priority audio active. Multiple clients can request priority audio at different priority levels, and the AZM must be able to handle requests that are "immediate" (only valid now) as well as requests that can be deferred, queue them up, and terminate expired priority audio sessions when a client process fails to send "keepalive" messages.</p>
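<p>The token-and-keepalive pattern described above can be sketched in miniature. This is only a toy model -- the real AZM ran as a C++ server on a proprietary RPC framework, and every name here is invented purely for illustration:</p>

```python
import time
import uuid

class PriorityAudioManager:
    """Toy model of a priority-audio session manager (all names hypothetical)."""

    KEEPALIVE_TIMEOUT = 2.0  # seconds a client may go silent before expiry

    def __init__(self):
        self._sessions = {}  # token -> (priority, last_keepalive_time)

    def request_session(self, priority):
        """Grant a token; a real server would also duck foreground audio here."""
        token = str(uuid.uuid4())
        self._sessions[token] = (priority, time.monotonic())
        return token

    def keepalive(self, token):
        """Refresh a session. Returns False if it has already expired."""
        if token not in self._sessions:
            return False
        priority, _ = self._sessions[token]
        self._sessions[token] = (priority, time.monotonic())
        return True

    def reap_expired(self, now=None):
        """Terminate sessions whose clients stopped sending keepalives."""
        now = time.monotonic() if now is None else now
        expired = [t for t, (_, last) in self._sessions.items()
                   if now - last > self.KEEPALIVE_TIMEOUT]
        for t in expired:
            del self._sessions[t]
        return expired
```

<p>The point of the reaper is the failure mode mentioned above: if a client process dies without releasing its session, the server must eventually restore the foreground audio on its own rather than leaving the radio ducked forever.</p>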
<p>The more recent PTT server has a similar multi-threaded design, in which multiple instances of hierarchical state machines are fed messages via a serializing message queue. The code called various driver APIs, some of which returned immediately, and some of which blocked, returning only when a new state was available from the DSP (for example).</p>
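<p>The serializing-queue design can also be sketched in a few lines. Again, every name and transition here is invented; the actual system was C++ with hierarchical state machines, but the essential idea -- one queue delivers all messages in order, so machine code never races another thread mid-transition -- is the same:</p>

```python
import queue

class PttMachine:
    """Stand-in for one state machine instance (transitions are invented)."""
    def __init__(self, name):
        self.name = name
        self.state = "idle"

    def handle(self, event):
        if self.state == "idle" and event == "key_down":
            self.state = "transmitting"
        elif self.state == "transmitting" and event == "key_up":
            self.state = "idle"

class Dispatcher:
    """A single queue serializes every message to every machine."""
    def __init__(self):
        self.machines = {}
        self.inbox = queue.Queue()

    def add(self, machine):
        self.machines[machine.name] = machine

    def post(self, name, event):
        self.inbox.put((name, event))  # safe to call from any thread

    def drain(self):
        # Deliver messages one at a time, in arrival order.
        while not self.inbox.empty():
            name, event = self.inbox.get()
            self.machines[name].handle(event)
```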
<p>These are two examples; depending on what is meant, some other things I've worked on might qualify. For example, applications that support AppleEvents, wait for serial data, or run as "daemons" handling audio transfer between a driver and user-space applications on MacOS X, or run as interrupt-driven tasks to mix audio data.</p>
<p><b>Describe any experience you have in developing web applications.</b></p>
<p>I am not familiar with Microsoft web frameworks like ASP.NET so if this employer is looking for someone who would "hit the ground running" to develop web applications using that framework, I'm not that guy. I would be willing to learn, though, and I think I have a track record that indicates that I could.</p>
<p>I am not a database guru -- I have worked with databases and solved problems with them, and I understand basic queries and the fundamentals of database design, but I am not an expert on (for example) query optimization for SQL Server; again, that is something I'd have to spend time learning.</p>
<p>I have not recently developed web applications using stacks such as LAMP (Linux, Apache, MySQL, PHP) or Ruby on Rails. However, I did work on several early web applications using Perl CGI scripts and plain old HTML -- for example, at Fry Multimedia I developed the web site for the Association of American Publishers. I was a beta-tester for Java before the 1.0 release and wrote early experimental "applets."</p>
<p>At the University of Michigan I used Apple's WebObjects (Java, with the WebObjects framework) to port my design for the Apple Newton survey engine to the web.</p>
<p>Later, while at InterConnect, I did some work (fixing bugs) on InterConnect's Perl and Java framework for "front-end" web sites -- the engine that generated HTML from database queries and templates, and made fixes to the web applications themselves in Perl, although this work was not my primary focus (see above).</p>
<p>I can solve basic Apache configuration issues, and I've done small projects such as setting up a personal wiki, written in Haskell, on an Ubuntu server. I had to do some problem-solving to get it functioning under Mountain Lion (MacOS X 10.8), which I wrote about recently here:</p>
<p><a href="http://geeklikemetoo.blogspot.com/2012/07/apple-breaks-my-gitit-wiki-under.html">http://geeklikemetoo.blogspot.com/2012/07/apple-breaks-my-gitit-wiki-under.html</a></p>
<p><b>What development project have you enjoyed the most? Why?</b></p>
<p>I have enjoyed a lot of them, but I'll give you one specific example. Back at the Office of Instructional Technology at the University of Michigan I worked with a faculty member in the School of Nursing to develop a program to teach nursing students about the side effects of antipsychotic medications. For this project we hired an actor to act out various horrible side effects, from drowsiness to shuffling to a seizure, and videotaped him. I enjoyed this project a lot because I got to collaborate with several people, including Dr. Reg Williams. I got to have creative input at all levels, from conception to final development -- for example, while developing the ToolBook application, I added animations to show how neurotransmitters move across synapses. I learned a number of new skills, including how to light a video shoot and the use of non-linear video-editing equipment (an Avid Composer), and I got to see the final system in use and receive positive feedback.</p>
<p>So I would say that what I liked most about that project was (1) being involved in the design and implementation at all stages, (2) the chance to work with some very collegial and welcoming people, (3) being able to learn several new skills, and (4) being able to "close the loop" and see how the final product was received by the customers (in this case, faculty and students). I have since worked in many development situations where parts of that process were lacking or missing entirely, and their absence often makes projects less enjoyable.</p>
<p>Here's a "runner-up." In 2000 I did some consulting work for Aardvark Computer Systems, assisting them with getting their flagship audio card working on Macintosh systems. Where the multimedia application I worked on several years earlier was high-level, this was very low-level: the issues involved included data representation ("big-endian" versus "little-endian") and the low-level behavior of the PCI bus (allowed "shot size" and retry logic). Debugging this involved working closely with an electrical engineer who set up wires on the board and connected them to logic analyzer probes, and configuring GPIO pins so that we could toggle them to signal where in the DSP code we were executing. This was tedious and fragile -- even while wearing a wrist strap, moving around the office near the desk could produce enough static electricity to cause the computer to reboot. The solution eventually required a very careful re-implementation of the PCI data transfer code using a combination of C and inline Motorola 56301 DSP assembly language. I had to pull out a lot of very low-level tricks here: paying close attention to the run time of individual assembly instructions from the data sheet, borrowing bits from registers to count partially completed buffers, studying chip errata and documentation errors in the data sheet, and dealing with a compiler that did not properly support ISO standard C and would let you know this by, for example, crashing when it saw a variable declared <b>const</b>. This also was very enjoyable: the chance to work very close to the hardware, learning new things, solving a difficult problem, working in close collaboration with a talented engineer, and getting the chance to actually ship the solution we came up with. In retrospect we forget the tedium and bleary eyes and remember the success.</p>
<p>Paul R. Potts</p>

<p><b>The Situation (Day 118)</b> (2013-07-07)</p>
<p>So. Day 118 of unemployment. Almost four months. It's getting hard to stay positive and keep anxiety at bay. Here's what's going on.</p>
<p>It might sound hopelessly naive, but I didn't think it would be this hard to find another job. I know I've been quite fortunate in some ways with respect to my career -- being very into, and good at, computer programming through the '90s and 2000s was a good place to be. I've been out of work briefly a few times before, when the small businesses or startups I worked for shrank or imploded, but I've never had much difficulty finding my next job, and the job changes have generally been "upgrades" to higher pay, or at least bigger projects and more responsibility.</p>
<p>The job market is certainly bad right now, and especially bad locally. I am trying to be both realistic and optimistic at the same time -- realistically, it seems to be absolutely useless, for the most part, to apply for publicly posted jobs. I've applied for dozens -- it would have been more if more were posted, but while there are a lot of job listings, it doesn't make any sense for me to apply for jobs that will not pay enough to cover our mortgage; if I got one, we'd still have to move. And we are still trying to figure out how to avoid that, so that we don't lose everything we've put into our house.</p>
<p>Working with recruiters has been an overwhelmingly negative experience as well, although there have been a few bright spots that have led to good leads and interviews. I'm really fed up with applying for a job listing for Saginaw or Flint only to find out that I'm actually contacting a recruiter about a position in Alabama or Mississippi or Florida. I've talked to recruiters at length who, it turned out, didn't even know the company they were recruiting for, because they were actually working for another recruiter. Is there even an actual job posting at the end of that chain of recruiters, or am I putting effort into some kind of scam? I don't know. I've also put a considerable amount of time into interviewing for contract positions, making the case that I am a strong candidate, only to be completely low-balled on hourly rate, to the point where it would make no economic sense whatsoever for me to take the job (for example, a six-month C++ programming contract out of state, in Manhattan, for a major bank, where I'm expected to be pleased to accept $50 an hour and no per diem or travel expenses).</p>
<p>My wife suggests that in the market right now, it will basically be impossible to find a job without having a job, except through personal contacts. That's discouraging, but she is probably right. And one difficulty is that I just don't have a lot of personal contacts in the area, since we've only been here three years. I have a few, and they've been trying to help me, but in general the leads (even with referrals from people who already work in the companies) have not yielded much that is promising -- usually a series of web forms where I upload a resume, then describe my work experience again in detail, write and upload a cover letter, fill out an elaborate series of questions -- this can and often does take two or three hours -- and then hear nothing whatsoever about the job again. For most of these, there is no person to contact -- no name, no phone number, no e-mail address. I'm faceless to the company, and they are faceless to me. That's just not a good prospect.</p>
<p>Still, I have a generalized feeling that the right thing will come along, at least for the short term. Essentially, I have to keep believing that. I keep feeling optimistic about particular jobs. But hearing nothing back over and over again for months is starting to wear me down.</p>
<p>The money situation is getting to be difficult. We still have a small positive bank balance, and I've been able to continue to pay for everything we need. Fortunately our consumer debt is very low -- far lower than that of a lot of American families like ours. But our savings are gone, so from here on out it's either income or selling off things. We are eligible for cash assistance from the state as well, to cover things like diapers -- we will look into that this week.</p>
<p>Unemployment continues to cover our mortgage and taxes, and food benefits are doing quite a good job at covering our food needs. But tomorrow, I will certify for my 17th week of unemployment. I have either 3 or 6 weeks left to collect out of Michigan's maximum of 20 -- I'm not sure which, because the state refused to pay me for 3 weeks, and I don't know whether those weeks are just lost or whether I am still eligible to collect them. I'm hoping so. We've got a situation with some other bills -- things like energy, water, life insurance, and things not covered by our food benefits. Fortunately energy bills were low during the spring and early summer, but we've had to turn on the air conditioning. We use window ventilation fans strategically. We have the AC on, set low, and the furnace fan running continually, and a separate portable AC unit here in my office, where it gets too hot to run computers without it. Our next bill is going to be high -- I'm guessing around $600. I may be able to get that reduced by starting up a budget plan with Consumers Energy.</p>
<p>The dehumidifier in the basement has stopped working, which is a potential mold problem with our things stored down there. We've got to go through some of the remaining boxes. The older of our two furnaces seems to be out of commission again. I had plans to put money into our insulation and HVAC this summer, as well as exterior repainting and a whole lot of minor repair items, but that doesn't really work without income. Fortunately the roof is holding up, as is the new set of gutters we put on last fall.</p>
<p>Oh -- the lead situation. We had the lead inspection -- a very thorough, all-day inspection of our home, inside and outside, and grounds. The inspector did not find anything really hazardous -- there is some old lead paint in baseboards and woodwork in a couple of rooms, but it is under layers of later paint, and it is not peeling. Our dining table, which was an old ping-pong table, had lead paint on it. It's now gone, as are all the kitchen towels we used to use to wipe it. That was about two months ago, and we have still not gotten the written report which is supposed to include the results of the soil samples. Another follow-up phone call is needed. In another month or so we will be getting the children's blood levels tested again, so we should then have a read on whether there is still any kind of daily exposure going on.</p>
<p>Even though we have been making payments via COBRA to continue our dental coverage, several dental bills have been refused, and so we have to straighten that out. Basically, as I'm sure you are aware, health insurance companies are slimy pig-fuckers, and I don't mean that in an affectionate way. We've got some big residual bills -- our four-year-old's dental work cost a lot, even after insurance. We are trying to get the refused bills re-submitted and get whatever issue there is with our dental coverage straightened out. I'm very concerned that something is going to hit my credit rating. So far, we have not failed to meet any of our obligations, but one of these medical providers could decide to sell a debt to a collection agency at any time and that will be a black mark against us. I'm going to pull my reports and try to make sure that hasn't already happened "silently" when an insurance payment was improperly refused.</p>
<p>I've raised a little cash by selling off some of my home studio gear. The Apogee Ensemble audio interface I've used for the last five years to record songs and podcasts is gone. I've sold a number of my guitars, including my Steinberger fretless bass and my Adamas 12-string acoustic guitar. There isn't much left to sell that is worth the effort -- for example, I could only get $75 for a made-in-Japan Jagmaster that I paid $400 for. No one wants to buy a 20" Sony tube TV from 1994 or an electric guitar that needs rewiring work before it can be played. I could start selling our library -- I've had to gut my book collection in the past. I'm really resisting this, though, in part because the return compared to the time and effort put in to do it would be very low -- there are no decent local used book stores that might send a buyer out, so I'd be carting boxes of books down to Ann Arbor -- and in part because I just don't think I can bear giving up a collection of books I've gradually built and shaped and cultivated over the years, in some cases books I've carried around with me since I was a child. Grace and I will do a pass through our possessions trying to find some things that might be easy to turn into cash, but in general we have always lived a very bohemian lifestyle -- furniture from the Goodwill, silverware collected from rummage sales, bookshelves from Ikea; my desk is a door on plastic sawhorses. I'm not going to sell my computers; that would be eating our "seed corn."</p>
<p>I've been reading the recent novels by Alex Hughes about the adventures of an ex-junkie telepath. In them, he has to meet with his Narcotics Anonymous buddy, who asks him each time to list three things he's grateful for. I'm grateful for many things. I'm grateful that there is a safety net, even if unemployment may not last until I get my first paycheck from my next job. I'm grateful for the SNAP program and the WIC program, which are pretty much supplying all the food we need to feed the family. I'm grateful for our small but supportive group of local friends, and our out-of-town friends. I'm grateful for the anonymous donation of a Meijer gift card sent by a friend of the family. I'm grateful for the handful of decent recruiters who are actually trying to hook me up with a real job, for our mutual benefit. And I'm grateful to my family, and especially to my wife, who has been very patient with me as I work through this process and all the frustration and anxiety that comes with it. And I'm grateful to you, my online friends, who have been supportive as well.</p>
<p>Paul R. Potts</p>

<p><b>The Situation (On Investing in a Revitalized Career)</b> (2013-06-18)</p>
<p>When I found myself unemployed, one of my first thoughts was that it would be a good opportunity to invest some R&D in my career. I had a plan to put in some serious time learning some new skills. I ordered some books on Scala, Objective C, iOS programming, digital filters, and a few other topics I wanted to study. I considered taking an iOS "boot camp" with Big Nerd Ranch -- it looked like a good class, but it just plain cost too much. I planned to work through a couple of books. I got in a couple of days of work and made some progress, but have come to realize that this was just a bit unrealistic.</p>
<p>In part, it's unrealistic because of the time required to manage benefits, as well as the job-search reporting requirements in which I have to log specific jobs applied for each week (only recently added, apparently). There's no option to say "I'm teaching myself some new skills so I can apply for better jobs." It hasn't helped that we've had a couple of other difficulties piled on too -- we're still waiting on the lead testing, now scheduled for this coming week. There was a heap of work to help my teenager finish some college application essays. There was some other family drama. In fact I had arranged to go stay with some friends in Ann Arbor for a week specifically to get away from the distractions here, and work towards a demo-able iOS app. When things blew up, I had to cancel that idea (although I did wind up doing it later).</p>
<p>I came across something else that I'd really like to do (although I missed this one). There's an organization that teaches two- or four-day intensive courses in Haskell programming. The last one was in the San Francisco Bay area. There is no guarantee at all that if I took the class, and met the folks there, doing the classic networking thing, it would necessarily help me get a better job. I'd really, really like to take the class anyway. I'm not asking for donations to go to a training class like that right now, as such -- I'm not sure it is quite the right time. I'm mostly writing this down by way of just putting my intention out there in some kind of concrete form.</p>
<p>I've been diddling around with Haskell for a number of years now. I've written about Haskell a few times on this blog. I've used it "in anger" -- to solve a real work-related problem -- a few times, for creating small utility programs, usually to chew through some data files, to prototype an algorithm that I later wrote in C++, or to generate some audio data for testing. It is, hands-down, my favorite programming language, a language that expands my mind every time I use it, and has taught me some entirely new ways to think about writing code, applicable to any language. I won't claim that Haskell is, <i>per se</i>, the great savior of programming. GHC can be awkward, and produces truly obscure error messages. It can be hard to debug and optimize. However, it seems to have some staying power, and perhaps more importantly, it is a huge influence on recent programming language designs.</p>
<p>Haskell didn't appear in a vacuum -- it has certainly absorbed strong influences from the Lisp family of languages, from ML, and maybe from other languages like Clean, and from others more obscure still. I love learning new programming languages, and I've learned new ideas from just about every language I've learned, but Haskell seems unique in the sheer <i>density</i> of its ability to blow your mind. Despite the fact that it is perhaps not practical for every application, I've become convinced that many of the paradigms and concepts behind Haskell really are the future of programming -- specifically, a competitive advantage, even something close to the ever-receding goal of a "silver bullet" for programming.</p>
<p>I'm really encouraged by the emergence of CUFP (Commercial Users of Functional Programming) and work that some companies like Galois and Well Typed are doing. I believe it is already practical to write complex embedded systems with real-time and space constraints in Haskell, or at least partially in Haskell. It looks like a few pioneers are already doing it. The expressiveness of the language, and the resistance to many kinds of common errors that the language design essentially gives you "for free" could be a big competitive advantage in embedded software designs.</p>
<p>I'm not sure if, at this stage, there are sufficient opportunities to join companies that are also interested in R&D along these lines, especially given that I don't have a Ph.D. and am not likely to acquire one in the near future. Certainly few people nearby seem to be doing this kind of work, and I'm not certain whether there might be an opportunity to join an existing consultancy as a remote employee. I might have to strike out on my own. Grace and I have also been talking about setting me up as an LLC, as opposed to just doing hourly work via W-2s. In fact, honestly, despite the fact that I don't think of myself as much of an entrepreneur, doing so may be the best long-term solution to the thorny question of how to get any significant "upgrade" to my career, in terms of both money and the challenge of doing new and meaningful work. But that's a big leap to make.</p>
<p>I realized that I have been thinking, and occasionally talking to friends, about the idea of forming a company to do R&D and consulting on using advanced languages for embedded programming for ten years now, or maybe even slightly longer. I didn't even know Well-Typed offered classes like this until I stumbled across the description online, but I have confidence that there will be more classes in the future. It seems like there is a window of opportunity. I didn't manage to get into iOS development at the start, like I did with Newton development, and I regret that (although, on the plus side, stuff works now!). It's hard juggling a career and a family. But I don't think it's too late to become a Haskell guru, for some value of "guru," and to feel that maybe programming isn't entirely devoid of innovation after all. And maybe even to enjoy programming again!</p>
<p>Paul R. Potts</p>

<p><b>The Situation (Post-Father's Day)</b> (2013-06-17)</p>
<p>I had a great weekend.</p>
<p>These posts have been largely kind of gloomy -- maybe understandably, given my ongoing unemployment. But I had a great weekend.</p>
<p>On Friday afternoon I had a phone interview that went, I thought, pretty well. Grace had taken the kids away with her to Ann Arbor where she had an obstetric appointment, and then stayed overnight with them with extended family, even taking them to a kind of barbecue/fishing party that sounded like a blast. On Saturday morning she picked up a CSA share that belonged to a friend, who was out of town and donated it to us. She got back Saturday afternoon. Our fridge is packed with fantastic produce. More on that in a bit.</p>
<p>I spent most of that time working on a <a href="http://praisecurseandrecurse.blogspot.com">Dylan program</a>, an implementation of the little Macintosh Polar puzzle game from 20-plus years ago. When I took breaks from the screen I worked on a Gene Wolfe novel that has eluded me for a long time -- the second part of the Short Sun trilogy, <i>In Green's Jungles</i>. Wolfe is one of my very favorite writers and I still think that the <i>Book of the New Sun</i> series is pretty much the masterpiece of late-twentieth-century fantasy and science fiction. I think <i>The Shadow of the Torturer</i> is the only book I've literally worn to the point of disintegration just by reading it over and over.</p>
<p>But he's a puzzling writer, and in the later series he gets more puzzling. Reading <i>In Green's Jungles</i> is like looking through a kaleidoscope held by someone else. As soon as you start to figure out what you're looking at, and say "Ah! Yes, I think I see what is going on," he twists the kaleidoscope and says "how about now?" And it's all a jumble of pretty fragments again. And so these are books that are unsatisfying on a first reading, and even a second reading. I've gotten further this time; maybe I'll even finish the second book. Maybe by the third reading I will be able to plow through the third and final book and feel like I have a sense of what is really going on. They differ from <i>The Book of the New Sun</i> in that the former series <i>can</i> be read as a straightforward adventure story, and it is satisfying in that way -- to a certain extent. Until you realize that Severian's story doesn't entirely hold up, and that he is an unreliable narrator, and then you fall naturally into the mystery, and start to form your own theories. I have a monograph I'm working on, about The Book of the New Sun, but I don't feel it is quite ready for publication, even on my blog. I feel almost ready to write about the second series, the <i>Long Sun</i> books. The <i>Short Sun</i> books are still largely a blur of glittering fragments to me.</p>
<p>I'm digressing again... back to my weekend. The time with my wife and family out of town was a great chance to dive back, just a little bit, into one of my favorite programming languages, and one that was hugely formative to my thinking about programming. In 1994 or thereabouts I was an alpha-tester for Apple's Dylan development environment, a tool that was ultimately relegated to the status of a technology demo rather than a viable language. At the same time I was developing real solutions in NewtonScript, the language that Apple actually deployed in the Newton product line. Trying to understand Dylan led me to Scheme and eventually to Common Lisp and Haskell. Dylan still exists in the form of <a href="http://opendylan.org">community-supported implementations</a> -- see also the <a href="http://dylanfoundry.org">Dylan Foundry</a>.</p>
<p>Dylan is a fascinating language but as I study the original documents in 2013 -- Apple's book <i>The Dylan Reference Manual</i> and the original Dylan book describing the language with Lisp-like syntax -- I see an over-designed language, in the sense that the core language, designed to allow both dynamism and efficient compilation, seems to have too many features to really enable the sort of optimizations that the designers imagined. Maybe I'm just mistaking implementation failures for language design failures. Is there a thinner core language to be extracted from the big-language spec, if some features could be sacrificed? And would that be worth doing? Because I also see an extremely expressive language, a language I far prefer to Java, the other language emerging at the time, with some wonderful features, not the least of which is generic functions, which still seems like the natural way to construct object-oriented programs which are open to tinkering and extension.</p>
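<p>Dylan's generic functions are multimethods: they dispatch on the classes of all arguments, and methods live outside any single class, so anyone can extend a generic for new types without touching existing code. Python's <b>functools.singledispatch</b> dispatches on only the first argument, so it is a much weaker mechanism, but it gives a rough feel for that open, extensible style. The shapes below are just invented examples:</p>

```python
from functools import singledispatch

class Circle:
    def __init__(self, radius):
        self.radius = radius

class Square:
    def __init__(self, side):
        self.side = side

@singledispatch
def area(shape):
    # The fallback runs when no method is registered for the argument's class.
    raise TypeError(f"no area method for {type(shape).__name__}")

# Methods are added *outside* any class definition -- the "open" style
# that Dylan generics provide for all argument positions, not just one.
@area.register
def _(shape: Circle):
    return 3.14159 * shape.radius ** 2

@area.register
def _(shape: Square):
    return shape.side ** 2
```

<p>A third party could later register an <b>area</b> method for their own class without editing this module at all -- which is the property that makes generic-function systems feel so natural for programs that are "open to tinkering and extension."</p>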
<p>Anyway, I got my program mostly working, and I'm talking to some of the remaining volunteer team about some remaining issues, so that's been fun. But I'm not writing today to talk about programming. I'm writing to talk about how grateful I am for my life and what my family and I are doing here in Saginaw.</p>
<p>Staying home in Saginaw for the phone interview Friday, I missed the travel and the company and the barbecue. But on Father's Day there was a friend in nearby Bay City who was moving his family -- a large family like mine. I thought helping their family would be a great way to spend Father's Day so I took my daughter and drove out there. It was a great afternoon -- there was food set up, a big U-Haul truck, and just enough guys volunteering. Veronica hung out with a gang of kids. The weather and the company were terrific. I helped load cabinets, dressers, a treadmill, helped take apart a picnic table -- all kinds of stuff. It was a reminder that doing work that requires me to exist only as a brain and a set of fingers is sometimes not gratifying, and that enjoying life is really often predicated on using the body, not just the brain. My back feels better than it has in months -- I worked it just hard enough to stretch everything out thoroughly and counteract some of the endless hours spent sitting at the computer looking at job postings. Today my back and arms and shoulders and wrists feel sore, but in a good way -- no stabbing pain or pinched-nerve sensations, just a pleasant ache of well-used muscles.</p>
<p>I wonder if that makes sense -- the idea that I would go spend most of my Father's Day helping someone else move, and honestly, I can't really say that it was entirely by way of trying to be virtuous or helpful. I feel like I got a lot out of it. It was fun. I'm really glad I went.</p>
<p>On the way back home I stopped at a bookstore, and indulged my habit. One of the books I picked up is a bit of fun trash (I say that admiringly). Alastair Reynolds is one of my favorite contemporary science fiction writers. He writes gloriously gothic and gritty space opera. He's now written a Doctor Who novel, a spin-off story set in the Jon Pertwee (<a href="http://en.wikipedia.org/wiki/Third_Doctor">Third Doctor</a>) era. I have not finished it but it is terrific so far. Somehow Reynolds, in print, manages to conjure up the low-budget location shoots, cliched supporting characters, awkward dialogue, excess foreshadowing, and cliff-hanger pacing of the old serials in a way that is both dead-on and affectionate.</p>
<p>But I was talking about greens... a kaleidoscopic jungle of greens, in our refrigerator, or something... oh, yeah. Sunday night is tossed salad and scrambled eggs night -- yes, inspired by the <a href="http://kenlevine.blogspot.com/2012/04/story-behind-tossed-salad-and-scrambled.html">closing song</a> from the old Frasier TV show. Do we really eat meals on a regular schedule? Well, more or less; Monday is always chili night, and I cook it. Tuesday is baked potato night, of some kind -- white potatoes or sweet potatoes, often topped with leftover chili -- you get the idea. Theme, but variations according to whatever is in the refrigerator. So Grace softened up some chopped garlic scapes and chives in butter, and threw in eight eggs, and some gorgonzola cheese, and fresh dill, and something else I probably don't remember -- and it was delicious. We had a big salad of mixed greens, fresh from local Michigan farms, at room temperature, tossed with a little leftover pasta salad rescued from her trip, and it was delicious.</p>
<p>And Grace couldn't eat any of it. Somehow between the pharmacy and her doctor's office and Medicaid they did not approve her enzyme prescription refill, and somehow sat on it for ten days, so that she didn't know it had not been approved until it was too late to do anything about it for this weekend. She's now out of pills, and so can't eat food without experiencing waves of nausea. So she sipped weak tea and watched us eat. We will be trying to resolve that today, and spend a few hundred dollars we can't really afford to spend, if we have to, so my wife can eat food. <a href="http://generalpurposepodcast.blogspot.com/2011/09/gpp-067-gallbladder-season-part-1.html">It seems like it should be a simple thing</a>, but it isn't. And unfortunately this is not the first time she's had to go without her enzyme pills. We remain hopeful that someday she will be able to go off them entirely, but having to forcibly go off them doesn't help that.</p>
<p>But it was still a shockingly delicious dinner. Sometimes life just hits you across the face, in a good way.</p>
<p>For dessert she made a strawberry-rhubarb fool, with fruit picked up from a farm-stand north of Ann Arbor. The strawberries were so ripe that you would not have wanted to pick one up and eat it -- they were just starting to dissolve into pungent red liquid. That's just when they actually taste the best, of course. She just cooked down the strawberries and rhubarb, with a little honey, and I threw in a tablespoon or so of dried thyme. The result was indescribably delicious. We served it to the kids with a little half-and-half drizzled on top, which curdled from the acid -- so it was kind of an ugly dessert, but delicious. I think Grace got to eat some of that, without the half-and-half.</p>
<p>It seems like a simple thing -- working, socializing, eating. We're running short of money, I'm still applying for jobs every week, I'm waiting to hear back on dozens of them, I'm waiting to hear follow-up from a number of interviews. It all seems complicated and challenging and stressful. But I had a great weekend. I hope you did, too.</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com2tag:blogger.com,1999:blog-21054185.post-86893696349947948712013-06-11T16:22:00.001-04:002013-06-11T19:26:41.155-04:00The Situation (Day 92)<p>So this is one of those days where everything is just "hovering." For the last few weeks I've had three or four recruiter phone calls and e-mails a day, but today I've had none. It's spooky, like the rest of the world was destroyed and I haven't gotten the news yet. Meanwhile, I've done some follow-up e-mails and messages, and gotten nothing back. Several different applications are in the post-interview stage, "hovering." I need to apply for some more local jobs, but I'm not seeing very many that are even remotely within the realm of possibility.</p>
<p>Just for fun I did a calculation on what it would take to do a daily commute from Saginaw to Bloomfield Hills. That's about 80 miles one way, taking approximately an hour and 20 minutes. I know people have commutes like this, and longer, but let's do the math as an exercise.</p>
<p>Our current main car is a late-model SUV that gets an average of 15.4 mpg. It probably will do a little better for an all-highway commute, but considering the possibility of heavy traffic and road construction, let's call it 16 mpg. For a 160-mile round-trip commute, that's a conveniently round ten gallons of gas a day. Gas today is about $4.20 a gallon. It will probably be lower off-season, but that's what it is today. That gives us $42.00 a day in gas, or $210 a week. Not accounting for vacation time, that's $10,920 a year just in gas. That doesn't cover wear and tear at all. The IRS standard mileage allowance including wear and tear and repairs for 2013 is 56.5 cents a mile; that works out in this case to $90.40 a day or (again, not taking vacation time into account) $23,504 a year -- in other words, that's what they consider the actual cost of owning and maintaining a vehicle and using it for that much travel.</p>
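For anyone who wants to check my arithmetic, here's the whole back-of-the-envelope calculation as a little Python sketch, using the same simplifying assumptions as above (16 mpg, $4.20 a gallon, a 5-day week, 52 weeks with no vacation, and the 56.5-cents-a-mile IRS rate):

```python
# Commute cost sketch: Saginaw to Bloomfield Hills, round numbers only.
ROUND_TRIP_MILES = 160
MPG = 16
GAS_PRICE = 4.20   # dollars per gallon (mid-2013 price)
IRS_RATE = 0.565   # IRS standard mileage allowance, dollars per mile

gallons_per_day = ROUND_TRIP_MILES / MPG       # 10 gallons a day
gas_per_day = gallons_per_day * GAS_PRICE      # $42.00 a day
gas_per_week = gas_per_day * 5                 # $210 a week
gas_per_year = gas_per_week * 52               # $10,920 a year, gas only

irs_per_day = ROUND_TRIP_MILES * IRS_RATE      # $90.40 a day, all-in
irs_per_year = irs_per_day * 5 * 52            # $23,504 a year, all-in

print(f"gas only: ${gas_per_day:.2f}/day, ${gas_per_week:.2f}/week, "
      f"${gas_per_year:,.2f}/year")
print(f"IRS all-in estimate: ${irs_per_day:.2f}/day, ${irs_per_year:,.2f}/year")
```

The gap between the gas-only number and the IRS number is the part people tend to forget: tires, oil, brakes, depreciation, insurance.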
<p>Something like a Honda Fit would obviously be a better choice, at somewhere in the ballpark of 30 mpg, but note that this would add a car payment, when we don't have one now, and so the overall cost would not be dramatically lower.</p>
<p>Note that this takes into account no "externalities" at all. Here's one externality: if I was going to be gone with the car all day, every work day, my wife would need a second car in order to run any kind of local errand at all with the family. So we'd then be a two-car family instead of a one-car family. So it wouldn't be a matter of swapping out one car for a better-mileage car -- where selling the first could help pay for the second. Of course the at-home car wouldn't incur nearly as much in the way of gas expense and wear-and-tear, but it isn't trivial just to maintain a car, even one you don't drive very much. It also doesn't account at all for the emissions, and what that is doing to the climate, or the fact that I'd be driving for almost 3 hours a day, turning a 9-hour day (with lunch) into a 12-hour day, and what that would do to me and my relationship with the family, and whether we'd be able to afford to hire someone to help replace some of my labor in and around our home (ranging from cooking and cleaning and mowing the lawn to child care).</p>
<p>So, alternatives. It would probably be cheaper to stay someplace much closer to a work situation in the metro Detroit area during the work week, and we're exploring that option. Relocation would be neither quick nor easy. So what's the cost of an extended-stay hotel close to the area? The cheapest one I could find online in a brief search was about $55 a night. Assuming I stayed Monday through Thursday nights and left from work on Friday, that's $220 a week (and note that these are still a commute from the workplaces, just a much shorter one, and I'd still have one $42.00 round-trip commute). So it isn't significantly cheaper. I don't think I could make the food options as low-priced as they are at home. Exercising that option, I'd be doing a lot less driving, and that would be great, but I wouldn't see my family at all for four nights a week. I'm chewing over whether I could find that tolerable, and for how long. I don't really want to be an absentee father; these years aren't really fungible, to be "made up for" later.</p>
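Putting the two options side by side, using only the figures above ($42.00 a day in gas for the daily commute, $55 a night for four nights plus one weekly round trip for the hotel option), the weekly comparison looks like this:

```python
# Weekly cost comparison: commute every day vs. extended-stay hotel Mon-Thu.
GAS_ROUND_TRIP = 42.00   # one Saginaw round trip in gas
HOTEL_NIGHT = 55.00      # cheapest extended-stay rate found

commute_week = GAS_ROUND_TRIP * 5              # 5 round trips: $210/week
hotel_week = HOTEL_NIGHT * 4 + GAS_ROUND_TRIP  # 4 nights + 1 trip: $262/week

print(f"daily commute: ${commute_week:.2f}/week")
print(f"hotel option:  ${hotel_week:.2f}/week")
```

In gas-only terms the hotel actually comes out a bit worse per week; it only wins once you count wear and tear on the car, the driving hours, and the IRS-style all-in cost of those extra 640 miles a week.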
<p>A local apartment might be cheaper. I haven't looked into that. But it is a good reminder that if I'm going to consider an arrangement like this, I have to be sure to ask for enough money to actually make it viable. Ideally "viable" would translate to "at least what I was earning before, with cost of living adjustment, and enough extra to cover the cost of the distance." Of course this isn't an ideal world. How about "after taking the cost of the distance into account, doesn't actually represent a decline in income?" And we may have to accept "we can break even doing this" as opposed to "I'm working, but going further into debt with every mile of scenic I-75 I traverse!" And this is why I continue to press for a telecommuting, or at least part-week telecommuting, option. And why we might ultimately have to give up everything we've been working for here.</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com1tag:blogger.com,1999:blog-21054185.post-79495575101355301482013-06-06T19:40:00.001-04:002013-06-06T19:40:53.817-04:00The Situation (Day 88)<p>I received word back (via paper mail) from the State of Michigan saying that my claim for 3 weeks of unemployment compensation, for the weeks ending April 27, May 4, and May 11 (see my earlier posts) is denied. The form I got back said it was found that "you did not report (certify) as directed and had no good cause for failing to report (certify)."</p>
<p>The cause I reported was that I missed certifying online by one business day because I was distracted by recruiters and interviews. In other words, because I was concentrating so much on searching for a suitable job. What would have been good cause, I wonder?</p>
<p>So, er, let this be a lesson to all you slackers!</p>
<p>It says I have the right to appeal in writing. Would there be any point to that, I wonder?</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com1tag:blogger.com,1999:blog-21054185.post-31907554833630523652013-06-06T14:15:00.003-04:002013-06-11T14:48:03.764-04:00A Counterfeit Motorola Razr V3 Cell PhoneI have an old Motorola Razr V3. It's from (roughly) 2005 or 2006. I use it without a contract, with a T-Mobile SIM card, buying minutes when I need to. I like this phone design, and I don't really want a smart phone or even a dumb phone with a touch screen, but mine is falling apart. I bought two allegedly new-old stock Motorola Razr V3 phones from an eBay seller. Unfortunately, they are counterfeits.<br />
<br />
I have opened a case with eBay to return them, but I thought it might be useful to share pictures.
Honestly, I wouldn't have minded much if (1) they worked well (they don't -- the speaker for speakerphone mode doesn't work, they don't vibrate, and the audio is poor), and (2) they were really cheap (they weren't that cheap -- I paid $59.99 each).
<br />
<br />
Take a look at the pictures. The gray phone is the original. The gold one is the counterfeit. It's very obvious when you just pick them up, open them, and try to work the buttons or open the battery compartment. The old phone opens smoothly and still feels solid. The new one grinds slightly and feels loose and flimsy.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-X2pj0IqR_oU/UbDPdxvNtCI/AAAAAAAADB8/T2sxVq7uHsU/s1600/DSCF0320.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://4.bp.blogspot.com/-X2pj0IqR_oU/UbDPdxvNtCI/AAAAAAAADB8/T2sxVq7uHsU/s320/DSCF0320.jpg" width="320" /></a></div>
<div style="text-align: center;">
Original: fit and finish is very clean. "M" logo button top center matches phone.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHfYwk1rgxtfvtuYaeyrTXUl9PDrJHhTi5Tek_TN7F2yI3bJ-TqcE9bUX__v6tDmpX5dUiDdgGFf5MbNmtTOQkFTjU8HzDISyeMfHIpp5gHEvZrWQDPn8TctEI4WReA2N8HPV9Sw/s1600/DSCF0319.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHfYwk1rgxtfvtuYaeyrTXUl9PDrJHhTi5Tek_TN7F2yI3bJ-TqcE9bUX__v6tDmpX5dUiDdgGFf5MbNmtTOQkFTjU8HzDISyeMfHIpp5gHEvZrWQDPn8TctEI4WReA2N8HPV9Sw/s320/DSCF0319.jpg" width="320" /></a></div>
<div style="text-align: center;">
Fake: front cover edge misaligned, "M" logo is blue and looks strange, buttons are loose.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-I8dUvsoRwqg/UbDPeMcVBhI/AAAAAAAADCI/CT8X-1yiJ20/s1600/DSCF0329.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://1.bp.blogspot.com/-I8dUvsoRwqg/UbDPeMcVBhI/AAAAAAAADCI/CT8X-1yiJ20/s320/DSCF0329.jpg" width="320" /></a></div>
<div style="text-align: center;">
Original: you can read all the serial numbers (even though the picture is blurry, sorry).</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-0r8Utq_PAhA/UbDPe9oOBNI/AAAAAAAADCU/rdrc9JeGYfE/s1600/DSCF0330.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://3.bp.blogspot.com/-0r8Utq_PAhA/UbDPe9oOBNI/AAAAAAAADCU/rdrc9JeGYfE/s320/DSCF0330.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fake: numbers are cut off; missing some numbers.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-t-BrU4gM33k/UbDPgArB3DI/AAAAAAAADCc/e9fa8adTecs/s1600/DSCF0334.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://1.bp.blogspot.com/-t-BrU4gM33k/UbDPgArB3DI/AAAAAAAADCc/e9fa8adTecs/s320/DSCF0334.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Original: logo is laser etched right into the aluminum surface.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-s-ZaTw455-8/UbDPhMIzxOI/AAAAAAAADCs/8JRz5pri5Nc/s1600/DSCF0335.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://3.bp.blogspot.com/-s-ZaTw455-8/UbDPhMIzxOI/AAAAAAAADCs/8JRz5pri5Nc/s320/DSCF0335.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fake: logo is painted.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-tgptCT52rXs/UbDPgq-n6QI/AAAAAAAADCk/MPUPbcepr-c/s1600/DSCF0338.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://4.bp.blogspot.com/-tgptCT52rXs/UbDPgq-n6QI/AAAAAAAADCk/MPUPbcepr-c/s320/DSCF0338.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Original: darker, glossy.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-zo8nDq6euTQ/UbDPh8lzUZI/AAAAAAAADC0/LE_axocH-1s/s1600/DSCF0339.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://1.bp.blogspot.com/-zo8nDq6euTQ/UbDPh8lzUZI/AAAAAAAADC0/LE_axocH-1s/s320/DSCF0339.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fake: type is different, lighter gray, matte.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCBTsa2ru53XPfKR2Qrb1zrbtPvyD7FtjARcq8JxJrvT0F5yYvahg87AyN03v9IIEnQUKVAQKwu2Fx7j2IPsQJMLD3fqUoqLjkRaik_u1iqROlX871P129-QqxctWKv7ZRmYQJRQ/s1600/DSCF0342.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCBTsa2ru53XPfKR2Qrb1zrbtPvyD7FtjARcq8JxJrvT0F5yYvahg87AyN03v9IIEnQUKVAQKwu2Fx7j2IPsQJMLD3fqUoqLjkRaik_u1iqROlX871P129-QqxctWKv7ZRmYQJRQ/s320/DSCF0342.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Original: inside battery compartment cover. Note recycling warning, 3 clips to stabilize cover. Release mechanism still works after many years.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-CWa3NXQVTQA/UbDPjscZ4LI/AAAAAAAADDI/Pm2n5PJh_bY/s1600/DSCF0344.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://1.bp.blogspot.com/-CWa3NXQVTQA/UbDPjscZ4LI/AAAAAAAADDI/Pm2n5PJh_bY/s320/DSCF0344.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fake: mechanism is extremely stiff and barely works, nothing molded on the inside.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-xzIYSMXoevQ/UbDPjlNOdDI/AAAAAAAADDE/QkRA8LBFF7s/s1600/DSCF0345.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://3.bp.blogspot.com/-xzIYSMXoevQ/UbDPjlNOdDI/AAAAAAAADDE/QkRA8LBFF7s/s320/DSCF0345.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Original battery hologram.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://4.bp.blogspot.com/-EaaMkPl0Axo/UbDPkejvQ3I/AAAAAAAADDU/9TIMXE-EL84/s1600/DSCF0346.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://4.bp.blogspot.com/-EaaMkPl0Axo/UbDPkejvQ3I/AAAAAAAADDU/9TIMXE-EL84/s320/DSCF0346.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fake battery hologram.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-JoQL1LvFbo8/UbDPl5VmxZI/AAAAAAAADDg/YNFGcz4hJpM/s1600/DSCF0350.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://3.bp.blogspot.com/-JoQL1LvFbo8/UbDPl5VmxZI/AAAAAAAADDg/YNFGcz4hJpM/s320/DSCF0350.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Original: still has a little rubber plug in that access hole after years of handling.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-BmYjqBLXLmk/UbDPlhLddUI/AAAAAAAADDc/cCQOyOZOgNA/s1600/DSCF0349.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://2.bp.blogspot.com/-BmYjqBLXLmk/UbDPlhLddUI/AAAAAAAADDc/cCQOyOZOgNA/s320/DSCF0349.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Fake: rubber plug stuck way out, fell out immediately with the gentlest handling, now it's around here somewhere...</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/-kpIrDKoc_-Q/UbDSjoQKCcI/AAAAAAAADD0/1c_8L06ppFI/s1600/DSCF0352.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://1.bp.blogspot.com/-kpIrDKoc_-Q/UbDSjoQKCcI/AAAAAAAADD0/1c_8L06ppFI/s320/DSCF0352.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Back covers. Note the raised logo and carrier on the original (right). Ignore the missing dark glass over the display on the old phone, I broke that many years ago...</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7wY-oUpRcR1AynzfkE35nAsKHGMkkPJxXXszSI7UuXXtyj0jETcaY8wq1GSPE2JI5MTwVgdq0LguYVwq38QqeJq5NTP0h07Cv8KX21wr3tVW4eX77628gwS4urKr9M2SDytLEIQ/s1600/DSCF0354.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7wY-oUpRcR1AynzfkE35nAsKHGMkkPJxXXszSI7UuXXtyj0jETcaY8wq1GSPE2JI5MTwVgdq0LguYVwq38QqeJq5NTP0h07Cv8KX21wr3tVW4eX77628gwS4urKr9M2SDytLEIQ/s320/DSCF0354.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
The cover of the manual.</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://3.bp.blogspot.com/-cgAEebOel00/UbdwtlkyjsI/AAAAAAAADEc/mpUjKw8ULmQ/s1600/DSCF0360.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://3.bp.blogspot.com/-cgAEebOel00/UbdwtlkyjsI/AAAAAAAADEc/mpUjKw8ULmQ/s320/DSCF0360.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
The printing inside the manual.</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="http://2.bp.blogspot.com/-d99asrRW35s/UbDSmb2fxVI/AAAAAAAADD8/I8yEfgriaDA/s1600/DSCF0316.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="http://2.bp.blogspot.com/-d99asrRW35s/UbDSmb2fxVI/AAAAAAAADD8/I8yEfgriaDA/s320/DSCF0316.jpg" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
Under the right lighting you can see that the battery compartment cover on the counterfeit phone is completely mismatched to the rest of the case. Wow! Crap-tastic!</div>
Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com4tag:blogger.com,1999:blog-21054185.post-20168929043481384192013-06-02T00:50:00.004-04:002013-06-02T00:51:42.192-04:00The Situation (Reporting from an Undisclosed Location)<p>I've sequestered myself here for a few days to try to concentrate on some Objective C and iOS programming.</p>
<p>I had an interview last Thursday. You can read about some of the technical aspects of the interview in this post on my programming blog, <a href="http://praisecurseandrecurse.blogspot.com/2013/05/an-interesting-interview-question.html">Praise, Curse, and Recurse</a> (warning: extreme programming geek content!)</p>
<p>I'm very grateful to my undisclosed friends for letting me work here in this undisclosed location, as well as feeding me some delicious undisclosed meals!</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-2568327792969423872013-05-28T22:49:00.001-04:002013-05-28T23:27:40.410-04:00WIC and Nutritional Advice<p>So, some followup. Grace finally found that Kroger had, in fact, ordered low-fat goat's milk, but no one got back to her to report that it had come in. She had gone to the store to check on it several times, but it had not been put on the shelf. She went back several times, and one morning found someone actually stocking shelves. She asked this person, who said "Oh! You're the one who ordered that goat's milk? It's been sitting in the back." So they get credit for actually ordering what we wanted, but pretty much a zero for customer service.</p><div><br></div><div><p>Grace took home 12 quarts, out of the 22 she was covered for that month (all that would fit in our refrigerator). And we started doing our best to figure out how to feed the kids a dozen quarts of low-fat goat's milk. We decided that making a batch of Cream of Wheat would use up a whole quart, so we did this several times. The kids ate that up. We had hoped that goat's milk wouldn't set off their milk allergies, but it did -- in fact, they seemed to react more than they did to cow's milk -- and so we had several sick kids, including a baby with an up-all-night ear infection. So we had to give up on the goat's milk. They won't cover the fortified almond or coconut milk that we normally drink.</p></div><div><br></div><div><p>Joshua is very small and so we took him to see a pediatric endocrinologist for evaluation. She does not seem to be concerned with the results of his blood tests and bone scan, although we are concerned, and we'll be looking for another opinion. 
He has a follow-up appointment set to find out why our two-year-old is almost as tall as, and weighs more than, our four-year-old. Meanwhile, the nutritionist in her office got back to us with some advice on how to deal with a child who does not seem to be eating an adequate number of calories. I'm looking at a handout entitled "How to Increase Calories." It starts out: "If your child is having eating problems, it's important to make every bite count. Getting in adequate calories can help your child maintain weight and continue to grow well."</p></div><div><br></div><div><p>It then goes on to list a number of different types of food to consider adding to your child's diet, including butter and margarine, whipped cream, and whole milk and cream. We already do some of this -- for example, we've been making him mashed sweet potatoes with butter and sour cream. He doesn't seem to have the milk allergies that his siblings have, but having a lot of milk in the house is problematic because they will demand what he is drinking.</p></div><div><br></div><div><p>We'll skip the margarine, to avoid soybean oil and hydrogenated fats. Sweetened whipped cream in the house leads to a feeding frenzy, but we might be able to use whipping cream unsweetened in Joshua's food. WIC won't cover whipped cream, butter, or whole milk of any kind. Cheese is covered, but in pretty limited quantities. We're definitely using all they will provide and then some.</p></div><div><br></div><div><p>The nutritionist also recommends cream cheese, sour cream (we already use it; not covered), salad dressing and mayonnaise (not covered -- but Grace makes deviled eggs for the kids pretty frequently, and tuna salad with mayo).</p></div><div><br></div><div><p>Next up are sweets. Honey, jam, sugar, granola, and dried fruits. These are a little problematic for us since Joshua already has four stainless-steel crowns due to extensive tooth decay. 
He eats carbohydrates and especially simple carbs preferentially to anything else. But we will try to figure out more ways to use these as an ingredient. I carry around granola bars and Balance bars in the car just so that if I happen to have him with me, I can give him a bar then -- instead of keeping them in the house, where the other kids would find them and eat the whole box.</p></div><div><br></div><div><p>Our plan for getting more calories into the kid involves the following: sour cream with meals whenever we can figure out a way to work it in. Sour cream and honey for a daily snack. Three or more desserts a week, including a jello dish made with mayonnaise one night, cookies made with nut butter such as cashew butter, chocolate chips, and garbanzo beans one night, and rice pudding (we might want to make it with almond milk so the other kids can eat it without setting off their allergies). He (but not the other kids) can have half-and-half or even heavy cream in place of milk. WIC covers none of this.</p></div><div><br></div><div><p>Grace has a saying about bureaucracies -- they can serve average needs really well. But they are, pretty much by design, incapable of serving special needs. Joshua isn't going to miss out on anything we think he could benefit from because of this -- we will cover what he needs with our regular SNAP benefits, as much as we can, and we'll figure out how to cover the rest. But it is troubling that they can't accommodate food allergies and special nutritional recommendations for members of the group that the whole program is designed to help.</p></div>Paul R. 
Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0tag:blogger.com,1999:blog-21054185.post-17371653104588359922013-05-27T12:30:00.000-04:002013-05-27T12:30:37.637-04:00The Situation (Day 78)<p>Happy Memorial Day!</p>
<p>I think my numbering got off-by-one a while back so I've gone through and adjusted the day numbers on some of these posts, assuming that day 1 was the Monday of my first full week of unemployment -- in other words, the first missed work day.</p>
<p>Yesterday my friend Bill visited us at our home here in Saginaw with his wife and their young child. I haven't seen Bill since... hmmm, I think it's been 4 years, since the last college reunion I attended? And then before that, it was much longer. He brought us some pies from Fuzzy's Diner, and we fed everyone baked beans we made from some of the pinto bean stash in our root cellar, flavored with maple and bacon and cooked overnight in a cast-iron dutch oven at 200 degrees. They were delicious. While our kids and his kid and some neighborhood kids ran around and got filthy, the best way to spend a sunny spring day, the grownups also got filthy -- we got through a bunch of garden work, which was very satisfying. Bill and I got to spend a little while playing music together -- he's a very talented guy. That was great. It was a little reminder that even in the midst of the stress and angst of The Situation, with stress over trying to find work, keeping our mortgage paid, collect benefits, and figure out what to do next, it is still late spring and the world is alive and beautiful and we are meant to be alive and enjoy it.</p>
<p>The Michigan unemployment office waits for no man (or woman), and MARVIN does not count holidays, so I certified online today. There is good news there. The online system says I will be paid for the previous two weeks. That will mean I've collected eight weeks of unemployment compensation out of the 20 I'm eligible for. No word, yet, on whether I will be paid anytime soon for the 3 calendar weeks that were withheld. If I'm never paid for those weeks, I <i>think</i> that it means I still have them in reserve, and could still collect them as calendar weeks 21-23 -- that would be the last week of July and the first couple of weeks of August.</p>
<p>Let's hope it doesn't come to that. If I'm not employed by then, though, we will be very low on cash, for things like water and electric bills and gasoline. Getting paid now for those missed weeks would really help with that.</p>
<p>Not getting paid now for them now could also mean that I could collect them later in the benefit year, if, G-d forbid, I wind up hired and laid off again. Let's hope <i>that</i> is really, really not necessary!</p>
<p>Please keep our family in your thoughts this week, as I have an interview tentatively scheduled for Thursday, and several employers I'm waiting to hear back from. I'll be off-line, devoting the rest of today to celebrating and remembering our lost loved ones.</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com2tag:blogger.com,1999:blog-21054185.post-57731167336197341612013-05-24T20:34:00.004-04:002013-05-25T16:10:48.930-04:00The Situation (Crisis, Opportunity, and Why I Am Not a Disabled
Electrical Engineer)<p>Nothing will test you, baffle you, and frustrate you like a period of unemployment. It's definitely one of those crises-slash-opportunities. I've had this tendency to see (and write about) only the "crisis" part. But it's honestly just now really starting to dawn on me that there is an opportunity aspect of this. I mean, I've known this intellectually, but it's been hard to <i>feel</i> it. I think I'm finally starting to feel it. And that presents, oddly, another crisis.</p>
<p>Today is a gorgeous spring day. Our neighborhood is this quiet, walkable boulevard with friendly neighbors, filled with flowering trees. I'm in a gorgeous old house. But at the same time this place is an economic shithole, in the grip of a terrible recession. A mile from here is one of the highest-crime neighborhoods in the country. There are hundreds of houses standing empty with foreclosure notices on the doors. It's kind of a mind-fuck, really, the disjunction between the apparent wealth and beauty here and the poverty behind the facade -- which, right now, is starting to include my own. This place looks like an opportunity, but it's actually mostly a crisis.</p>
<p>We're thinking very hard about whether we will have to leave Saginaw. After 3 years here, the kids have settled in, and made friends, and we've started to establish a rhythm and pattern to life here. We've made improvements to our house. We've put money and love into our gardens.</p>
<p>I don't want to go. I want this to work out. We chose this place, to be close to family, to find affordable real estate and space for our children, and to do what things we could to try to reverse the decline, the flight from what was a major American city.</p>
<p>If we do have to leave, it will be under a dark cloud; we'll have lost everything we put into our home, most likely, and more. We'll be demoralized; the kids will lose the ties they've made here, the small roots they've put down. It will feel like a failure. I've never been one to be excited about the road more taken. And we'll be treading the path taken by fifty thousand people before us, over the last fifty years or so.</p>
<p>There is good news on the job search front. I've been able to have good discussions with some hiring managers who are willing to consider a work arrangement that would help us stay in our home. The details are yet to be determined, but it's encouraging to have a manager suggest that they might be willing to have me in the office for part of the work week, and working from home for the rest. One guy even asked me what kind of support I would want from the company to set up such an arrangement. That actually baffled me. I just sort of went "ummm, ummm, what?" It took me a while to even <i>understand</i> that he was suggesting his company might chip in to help cover accommodation, for staying overnight, or a mileage allowance to pay for gas. In other words, he seemed to want to make the job attractive to me -- by offering perks! Isn't that weird?</p>
<p>Michigan has been beaten down. I've been beaten down too. When I set up my arrangement to work at home for my last employer, I resolved that I would not ask for any special concessions, since I had asked for and gotten working from home as a special privilege. I bought the computers I would need for my home office myself, and paid for the network gear and high-speed internet and separate phone line and desk and oscilloscope and logic analyzer. When I did have to travel to the office, I didn't submit a reimbursement request for mileage. I endured the jokes -- like the snarky comment from a co-worker, when I mentioned in a conference call that I would be on vacation, that he thought I was already on permanent vacation. (Um, when I'm on vacation, I don't actually spend eight hours a day writing C++. OK, well, Haskell, maybe, but not C++.)</p>
<p>I've forgotten that I'm actually the one in charge of the job search and the job interview and that I should be asking for what <i>I</i> want at this stage, not bending over backwards to accept less -- off the bat. I didn't used to be this way. Michigan has beaten me down. Thirteen years of declining wages -- in terms of cost of living, my earnings peaked in 2000 and have been on a gradual downward trend ever since -- have beaten me down. I haven't even been <i>asking</i> for what I need to feel rewarded and to be secure, because all around me I've heard, year after year, "you should just feel fortunate to have a job." If one takes that at face value for long enough, one starts to believe it, and of course then suddenly <i>not</i> having a job feels like all crisis, and no opportunity.</p>
<p>In addition to managers, I am talking to several recruiters every day now -- so many that it's becoming kind of overwhelming, actually. This really started happening after I posted my resume on Monster and CareerBuilder. I've gotten on some mailing lists, apparently. I don't have a lot of experience working with recruiters. Grace has advised me to give serious thought to every conceivable offer, and to let the other guy be the first one to say "no, it's not a good match." I'm really unaccustomed to pursuing more than one job opening at a time, so I'm in alien territory here, but I think it's basically good advice.</p>
<p>One of those "no" answers happened today -- I was being solicited for a four-month contract development position with a large bank in New York City. I had initially told the recruiter I wasn't interested in moving to NYC, leaving my family here, for a short-term gig. But Grace, it turns out, has a friend who was willing to provide a place to stay. So I got back in touch with the recruiter and said "let's talk about it some more." So I had a basic phone screen about object-oriented programming and C++, and then I got the dreaded question about salary requirements. I was thinking I wanted $125 an hour, but what popped out of my mouth was that I wouldn't really consider an hourly, W-2 gig like this in New York for less than $75 an hour. He argued that I was asking too much and wanted me to lower my requested rate. Honestly, I really wasn't expecting that -- I thought the rate I was quoting was, if anything, very low. We sort of sputtered at each other for a while; he talked about how if this 4-month contract worked out well, they would be able to turn me around quickly into a new one. But his reaction to my rate was pretty clearly the "no" that my wife was talking about.</p>
<p>After that exchange I was curious as to just how hard he was trying to screw me, so I asked around a bit. I don't have detailed numbers, but from what I was able to tell, developers who do this type of hourly work in NYC typically average something like $90 an hour, but experienced developers can make more than that. Sometimes a lot more. They can often also get a per diem for this sort of work to help cover expenses. So it's pretty clear he was punking me; I think the organization in question is an intermediary, and they would collect $120 an hour or more for my services from the bank, and then pay me half that. Sorry, not interested. Actually I'm not sorry at all that I'm not interested. What I'm sorry for is that I wasted so much of my own time talking to a dickweed.</p>
<p>I'm also sorry that I was under-valuing myself, and by extension my peers. I've really got to stop doing that. I need to remind myself that the kind of work I do could be the keystone of a product or a service that could easily net a business a million dollars, or more, in a given year. I'm ready to support that product, to work late to add features requested at the last minute, to quickly turn around bug fixes. It isn't crazy to want a measure of financial security in exchange for that. The conversations with recruiters have had a tendency to break down when it gets to the "let's discuss compensation" stage. I'm really sick of hearing that the senior, great, plum, choice, wonderful job from the fabulous employer they are trying to sell me on couldn't possibly stretch to cover the salary I'd like. I'm also sick of hearing how they just flatly would never accommodate a telecommuter, or a part-time remote worker. It's funny how a lot of these companies are perfectly comfortable out-sourcing huge swaths of their software development work to workers in Bangalore, workers they rarely if ever meet in person, in a time zone nine and a half hours away. But ask them to let an experienced American worker build code from a home office 100 miles away and they get... distrustful. Just tell yourself "I'm just lucky to have a job," right?</p>
<p>The kind of work I get called back for these days, around here, is embedded development. I'm a good programmer, and an experienced programmer. (Those two things are actually orthogonal to each other; I've met plenty of experienced developers who don't seem interested in their craft and who churn out really ugly code.) Embedded work is more typically done by people with EE degrees or Computer Engineering degrees, which combine some Computer Science and Electrical Engineering. I sort of fell into embedded work by accident, when a friend of a friend needed some assistance understanding low-level Macintosh programming, to get an audio PCI card working. Really, my interest in hardware goes way back -- to childhood, when I got Radio Shack 150-in-1 electronic kits as birthday gifts, or build-your-own-digital-computer kits, or took apart reel-to-reel tape decks, to using an FM receiver to hear music generated by the unshielded radio frequency interference from the circuitry of a TRS-80, to wiring AppleTalk networks in college, to fixing broken telephone answering machines with a soldering iron, and so on, and so on -- but I never learned electronics the way an Electrical Engineer learns electronics.</p>
<p>Anyway, back to that audio PCI card and its firmware, written in C and Motorola 56301 DSP assembly language. I was able to help get that working, and that's sort of the work I've been in since. When I talk to a recruiter I have to make sure I clarify that no, I don't have an EE degree, or a B.S. degree at all, because to some employers that's become a critical issue. Also, I've been asked to specify my college grade point average a lot, and even submit a college transcript. I feel I should point out that I have not actually had to submit a transcript or specify my G.P.A. to get a job since -- my first job after college? It's been a long time. I was not terribly focused on good grades. I sometimes learned the most in classes where I got quite terrible grades. But I have a B.A. from a liberal arts school. My major was English. I took a minor in Computer Science because I loved programming. If the recruiter seems to know anything about programming I might talk about how, as a practical matter, I've rarely needed math beyond algebra to solve programming problems, even ones that sound initially complicated, like smoothing sampled compass headings using a decay factor based on the natural log of the current velocity. Or I might talk about how useful it has been to be able to write good documentation.</p>
<p>A select few seem to get it. They grok the idea that someone can have general intelligence and a willingness to work hard and an aptitude for solving problems, and that this might make someone a valuable employee. They're willing to consider someone who isn't a perfectly square peg, and in fact someone who might help carve out a differently-shaped hole.</p>
<p>Over the last decade or so, in the years I've spent actually getting things done, it doesn't typically occur to me to think of myself as a disabled hardware engineer or a failed computer scientist. I feel successful, in fact, for the most part. I've written a lot of code. A lot of it has shipped out in products. And you need only take a look at my bookshelves to know I love to learn about computer science and particularly programming languages, both current and historic. I've been known to read patents for fun. I don't have a Ph.D. in Computer Science, but I feel a great affinity for those folks -- particularly the ones who specialize in language design and implementation, and I've taught myself a few things over the years.</p>
<p>I admire what EEs can do. I can't design a power supply. Sometimes I wish I knew more electrical engineering; I'd relish the opportunity to be truly mentored by an EE, or take some classes to try to get my math background up to snuff, if I could do that part-time. At one point I had some idea how an adder was built out of logic gates, but that class was a long time ago. These days I understand hardware in terms of registers and bits and interrupts and clock cycles and byte lanes and cache and memory-mapped I/O, which are really abstractions on top of real hardware that can be incredibly reliable but often is flaky and requires reading a lot of the "oops, we goofed, here's the workaround" notices that are euphemistically called by chip makers "errata." I can get a lot of useful information out of data sheets, but there is extra information in them that is targeted at people with training and experience other than mine. I'd like to understand that world better. I'd like to be better at soldering.</p>
<p>The EEs I've worked with could help me determine that, yes, that peripheral on a board wasn't working because of an actual hardware problem. They could read schematics more easily than I could, and make use of their deeper knowledge of the hardware to help me figure out what the code needed to do, to work around issues. I could write clean, functional, well-factored, well-commented C code to do what needed to be done. Good EEs often aren't good at that, and more importantly aren't necessarily that interested in becoming good at it; to them the elegance of the circuitry might be important, and the code an afterthought. I've tutored EE programmers in how pointers and classes work. I've learned a bit from them about pull-ups and pull-downs, RS-232, balanced and unbalanced circuits, noise and capacitance, clocking and PLLs and clock dividers, and the practical aspects of driving various peripherals.</p>
<p>To me it's often distressing what they don't know -- and how poorly many of them write, whether we're talking about code or comments or specs or even a business e-mail. To me those things are pretty much all of a piece. But I try not to judge. We have our little jokes. They talk about how important it is for English majors to learn to say useful phrases, like "do you want fries with that?" I talk about how you know an electrical engineer likes you, because when you are having a conversation, he stares at your shoes instead of his shoes. Ha-ha, only serious.</p>
<p>They have a different focus. To them it's probably often distressing what I don't know. Since I've been writing code since childhood, code is my element. Working with EEs has actually seemed like a pretty good symbiotic relationship to me. Code is a different thing, and software architecture and design is not hardware engineering. I don't come at writing code quite like an EE comes at designing a circuit. I need to remind myself that this has been a good thing, and I think a lot of teams that build embedded programs could benefit from at least one engineer who specializes in software architecture. I find that position "at the boundary" between code design and hardware a very interesting place to live and work, especially as I'm always trying to push my own boundaries -- to move towards using more expressive, higher-level languages and tools, to advance my understanding, to get more leverage, to get a competitive advantage. I just wish more managers and EEs could see someone with my skills as a whole asset, a "resource" as they say, and complementary to them, and not a broken thing with a piece missing who has somehow managed to limp into a job. And it's not just them I have to keep convincing.</p>Paul R. Pottshttp://www.blogger.com/profile/04401509483200614806noreply@blogger.com0