
Movies on Stellaris - Part 1

Posted by
Matt Sieker
in Stellaris on Sun 17 February 2013

So, I got a Stellaris Launchpad the other day and was wondering what project to do with it. After a bit of thought, looking at the various parts I had sitting around (which included a TFT LCD display with an SD card slot), and thinking back to a project I'd seen before, I decided that for my first project I'd see how fast I can pull data from the SD card and write it out to the LCD (both over SPI).

The video file? Since I didn't want to write any complex decoding at this point on the board, I decided to have a basic format that's RLE encoded. And what video would encode quite well with RLE? I'm glad you asked:

Simple black and white. I'm not going to worry about the audio right now.

The first step: How on earth to read the video, and then write it back out encoded? The answer? OpenCV and Python. A half dozen lines to read the video frame by frame, smash down the color depth (not really needed right now, but it's there in case I want to process the image further at some point), and then shove the image into a byte array.
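Those half-dozen lines would look something like this. This is a sketch of the idea rather than my actual script: the function names, the output naming scheme, and the 4-bit quantization level here are all placeholders.

```python
import numpy as np

def quantize(frame, bits=4):
    # Smash the color depth down: keep only the top `bits` bits of each pixel.
    shift = 8 - bits
    return ((frame >> shift) << shift).astype(np.uint8)

def dump_frames(video_path, out_pattern="frames/%05d.bin"):
    # OpenCV is only imported here so the pure-array code above stands alone.
    import cv2
    cap = cv2.VideoCapture(video_path)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        with open(out_pattern % count, "wb") as f:
            f.write(quantize(gray).tobytes())
        count += 1
    cap.release()
    return count
```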

For testing, I wrote a small C# program that reads in the files one by one, creates an image out of each, and shoves it into a PictureBox on a form. Also simple.

A sanity check: Can I take the video frames, encode them via a brain dead method, and read them back out?


Yep. Although this is about the most brain dead method possible: Each pixel is two bytes, one indicating a run length of 1, the other storing the value of the pixel.
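In Python, that brain-dead scheme and a decoder for it would be a few lines like this (the order of the two bytes within each pair is an assumption on my part):

```python
def encode_naive(pixels):
    # Two bytes per pixel: a run length of 1, then the pixel value.
    out = bytearray()
    for p in pixels:
        out.append(1)
        out.append(p)
    return bytes(out)

def decode(data):
    # Read (length, value) pairs and expand each run.
    out = bytearray()
    it = iter(data)
    for run in it:
        value = next(it)
        out.extend([value] * run)
    return bytes(out)
```

The payoff is that the same decoder keeps working once real run lengths show up later.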

Now, let's try using RLE encoding to get something out of it. The RLE variant I use works as follows:

Use itertools.groupby to get byte runs in the data

For each group back from itertools:
   Is the length greater than the minimum run?
      Flush the small-run buffer
      Write out run length + 128
      Write out the byte value for the run
   Otherwise:
      Is small-run buffer + current run >= 127?
         Write out the buffer length
         Write out the small-run buffer
      Add the run to the small-run buffer

Very rough pseudocode. Essentially, if a byte value is 128 or greater, that value minus 128 is the length of the run, with the next byte being the data byte. If it's less than 128, the next n bytes are treated as literals. There's some added logic for when the run length is greater than 127, where it figures out how many 127-length runs to write out.
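Fleshed out in Python, the scheme looks something like this. It's a sketch, not my actual encoder: the MIN_RUN value and the exact point where the literal buffer gets flushed are assumptions.

```python
import itertools

MIN_RUN = 3      # assumption: runs shorter than this go into the literal buffer
MAX_RUN = 127    # run/literal lengths must fit in 7 bits

def rle_encode(data):
    out = bytearray()
    literals = bytearray()

    def flush_literals():
        # Emit buffered short runs as literal blocks: count byte, then raw bytes.
        while literals:
            chunk = literals[:MAX_RUN]
            del literals[:MAX_RUN]
            out.append(len(chunk))
            out.extend(chunk)

    for value, group in itertools.groupby(data):
        run = len(list(group))
        if run >= MIN_RUN:
            flush_literals()
            # Runs longer than 127 get split into several max-length runs.
            while run > 0:
                n = min(run, MAX_RUN)
                out.append(128 + n)
                out.append(value)
                run -= n
        else:
            literals.extend([value] * run)
    flush_literals()
    return bytes(out)

def rle_decode(data):
    out = bytearray()
    it = iter(data)
    for tag in it:
        if tag >= 128:
            out.extend([next(it)] * (tag - 128))
        else:
            out.extend(next(it) for _ in range(tag))
    return bytes(out)
```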

Overall efficiency isn't the best of course, since this is a rather simple method of compression. The largest frame at 23011 bytes is this one:


Lots of grays and shading. Closely followed by this one at 23009 bytes:


Once again, a rather expected worst case. I've attempted some dithering in the pipeline to help. Overall size for 6568 frames is about 40MB, reduced to a resolution that will fit on the LCD display I have. The video runs at 30fps; I plan on reducing that to 15fps at first, which would cut the size to roughly 20MB. So, my target transfer rate off the SD card over the SPI bus is about 93KBps. Some preliminary research puts this in the "very possible" range.

I plan on joining the loose files together in a rather simple format of (file length)(file data)(file length)(file data), etc., with the final length being 0. This gives roughly 6.5KB of overhead, since I can cram the file lengths into 16-bit ints.
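A minimal sketch of writing and reading that container with Python's struct module; little-endian lengths are an assumption here (matching the Stellaris's ARM core), not a decision I've committed to.

```python
import struct

def pack_frames(frames, path):
    # (uint16 length)(data) repeated, terminated by a zero length.
    with open(path, "wb") as f:
        for frame in frames:
            f.write(struct.pack("<H", len(frame)))
            f.write(frame)
        f.write(struct.pack("<H", 0))

def unpack_frames(path):
    frames = []
    with open(path, "rb") as f:
        while True:
            (length,) = struct.unpack("<H", f.read(2))
            if length == 0:
                break
            frames.append(f.read(length))
    return frames
```

The worst-case frame of 23011 bytes fits comfortably in a uint16, so the zero terminator is unambiguous as long as no frame is empty.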

Next up, poking at the chip itself.


Using Visual Studio for a node.js project

Posted by
Matt Sieker
in Coding on Sun 22 July 2012

I recently ran across the need to be able to work on some node.js code from within Visual Studio (The rest of the project is .Net, with node being used for a few unique features). Previously I had done this work over a SSH session to Emacs, but I decided I wanted something more integrated.

My first task was to make a project file that would work in Visual Studio, with the build tasks being whatever I wanted to do to my Javascript, and nothing else. To do that, I first chose to create a blank MVC project, then delete all of its contents and references. Then I opened up the project file and removed the following line:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

And then added the following:

<Target Name="Build" />
<Target Name="Clean" />
<Target Name="Rebuild" DependsOnTargets="Clean;Build" />

Then the project was reloaded in Visual Studio to make sure all was well. I then added my Javascript files to the project, ran a build, and made sure Visual Studio didn't complain about anything.

Next up was the ability to run JSHint against the code in my project, while excluding pre-minified files. I already had node.js on my path, so I installed jshint as a global package with

npm install jshint -g

After that, I grabbed the jshint reporter that worked with Visual Studio from the node-jshint-windows project and dropped it into my build libraries folder. Then I modified the Build task as follows:

<Target Name="Build">
    <ItemGroup>
        <ToLint Include="@(Content)" Exclude="**\*.min.js" />
    </ItemGroup>
    <Exec Command="jshint --reporter &quot;$(SolutionDir)libraries\build\vs_reporter.js&quot; %(ToLint.Identity)" ContinueOnError="false" />
</Target>

This takes any files within the Content ItemGroup created by Visual Studio, excludes pre-minified files, and then runs jshint against them. The errors will then show up in the Error Output window just like any other sort of build errors. If desired, ContinueOnError can be changed to true, if you would rather JSHint errors be reported as warnings.


Using the Sparkfun LCD shield with the MSP430 - Part 2

Posted by
Matt Sieker
in MSP430 on Sun 15 July 2012

First, I now have the code for this on Github: MSP430NokiaLCD. Several things have changed in this code from the earlier posting:

  1. SPI bit-banging loops are unrolled. I wish there was some way I could get rid of the if in there...
  2. Some drawing functions are now in place: LcdDrawPixel, LcdDrawLine, LcdDrawRect. Most of these functions come from James Lynch's documentation mentioned in the previous post, modified to work with my bit-banging routines.
  3. After finding the Universal Color LCD graphics library on the 43oh forums, I initially tried swapping out my library with this one to see if I could get it to work with my LCD. No dice. However, looking through it, I modified my library from 12-bit color to 8-bit palettized color. The loss of color depth probably isn't that noticeable on these screens, and there's less data to move across the wire.
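One plausible way to map a 12-bit 4-4-4 color down to a single byte is a 3-3-2 split. This is a sketch of the idea only; it's not necessarily how the 43oh library (or my port) actually builds its palette.

```python
def rgb12_to_332(color12):
    # Take a 12-bit 4-4-4 color and keep 3 bits of red and green, 2 of blue.
    r = (color12 >> 8) & 0xF
    g = (color12 >> 4) & 0xF
    b = color12 & 0xF
    return ((r >> 1) << 5) | ((g >> 1) << 2) | (b >> 2)
```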

I've got a bouncing square going, and I think I have it animating about as fast as I can without too much tearing or blurring. I think it's about the best I'm going to get out of this LCD:

Here's cycling through the palette:

Compared to the previous 12 bit code:

It's a decent improvement.

Next, I'll probably work on text rendering. I seem to recall a minimal font where each letter was made up of a handful of pixels, but now I can't remember what it's called. I might just use Droid Sans Mono for it.


Using the Sparkfun LCD shield with the MSP430 - Part 1

Posted by
Matt Sieker
in MSP430 on Sun 15 July 2012

After declaring myself a microcontroller hipster and deciding the Arduino is too mainstream (that, and I dislike the software: there's no debugging, and it produces rather bloated code...), I decided to pull out the TI Launchpad I got back when they were first announced and try to do some stuff with it.

I was at a local electronics store the other day, and I noticed they had Sparkfun's Color LCD Shield. Granted, it's for the Arduino, but I figured I could use it anyhow just by plugging in wires.

After many attempts at getting it working, I borrowed my girlfriend's Arduino and started fiddling with the provided code samples. After a bit of experimentation, I discovered that it was a Philips-controlled LCD, and not an Epson like I thought at first (three days of trying to get that working...). Once I had that sorted, it was rather easy going.

Since there is no good reference I can find for using this controller with an MSP430, I've decided to post the relevant bits here. Keep in mind, this is for the Philips version; it shouldn't be hard to modify for the Epson interface. It's based on James Lynch's documentation, so consult that if you wish to adapt this code for the Epson LCDs. One thing I noticed using the clear-LCD code from there was that the last row on the LCD would still be garbage after filling, so instead of (131*131) I use (131*132), which clears all the rows.

#define LCD_P_SLEEPOUT  0x11
#define LCD_P_INVON     0x21
#define LCD_P_COLMOD    0x3A
#define LCD_P_MADCTL    0x36
#define LCD_P_SETCON    0x25
#define LCD_P_DISPON    0x29
#define LCD_P_RAMWR     0x2C
#define LCD_P_CASET     0x2A
#define LCD_P_PASET     0x2B

#define     LCD_OUT     P1OUT
#define     LCD_DIR     P1DIR
#define     LCD_RES     BIT3
#define     LCD_CS      BIT4
#define     LCD_CK      BIT5
#define     LCD_MOSI    BIT6

#define     PIXELCOUNT  (131*132)/2

void WriteMessage(unsigned int message){
    unsigned char width = 9;
    LCD_OUT &= ~LCD_CS;
    while(width--){
        LCD_OUT &= ~LCD_CK;
        if(message & 0x8000){
            LCD_OUT |= LCD_MOSI;
        } else {
            LCD_OUT &= ~LCD_MOSI;
        }
        LCD_OUT |= LCD_CK;
        message <<= 1;
    }
    LCD_OUT |= LCD_CS;
}

void LcdWrite(unsigned char command, const unsigned char* data, int length){
    unsigned int out = 0;
    unsigned char i;
    out = command << 7;                    /* bit 15 clear marks a command byte */
    WriteMessage(out);
    for(i = 0; i < length; i++){
        out = (1 << 15) + (data[i] << 7);  /* bit 15 set marks a data byte */
        WriteMessage(out);
    }
}

void LcdInit(){
    static const unsigned char colmodData[] = {0x03};  /* 12-bit color */
    static const unsigned char madctlData[] = {0xC8};
    static const unsigned char setconData[] = {0x30};

    LCD_OUT &= ~LCD_RES;      /* pulse reset low, then release */
    __delay_cycles(10000);    /* adjust delays for your clock speed */
    LCD_OUT |= LCD_RES;
    __delay_cycles(10000);

    LcdWrite(LCD_P_SLEEPOUT, 0, 0);
    LcdWrite(LCD_P_INVON, 0, 0);
    LcdWrite(LCD_P_COLMOD, colmodData, 1);
    LcdWrite(LCD_P_MADCTL, madctlData, 1);
    LcdWrite(LCD_P_SETCON, setconData, 1);
    LcdWrite(LCD_P_DISPON, 0, 0);
}

void LcdClear(unsigned int color){
    unsigned int i;
    static const unsigned char pasetParams[] = {0,131};
    static const unsigned char casetParams[] = {0,131};
    LcdWrite(LCD_P_PASET, pasetParams,2);
    LcdWrite(LCD_P_CASET, casetParams,2);

    WriteMessage(LCD_P_RAMWR << 7);

    /* each pair of 12-bit pixels packs into three bytes on the wire */
    unsigned int color1 = ((((color)>>4)&0xFF) << 7) + ((unsigned int)1 << 15);
    unsigned int color2 = ((((((color)&0x0F)<<4) | ((color)>>8))) << 7) +((unsigned int)1 << 15);
    unsigned int color3 = (((color) & 0xFF) << 7) + ((unsigned int)1<<15);
    for(i=PIXELCOUNT; i>0;i--){
        WriteMessage(color1);
        WriteMessage(color2);
        WriteMessage(color3);
    }
}

A few notes about this code:

  1. I've tested this at 1, 12 and 16MHz, and have had no issue. I assume that all frequencies in between will work.
  2. Bitbanging is used, since the 430 I'm testing with has a USCI and not a USI. SPI with USCI only appears to support 8 and 16 bit transfers.
  3. Clearing the screen, even at 16MHz, takes a disappointing amount of time (200ms or so, judging by the LED I have that toggles every time it clears). I need to fiddle with the bitbanging code some.
  4. For some reason, colors appear to be in BGR ordering instead of RGB. Not a major issue, but still kinda strange.

I'm going to work on making this code faster, get some graphics routines in, and make another post when I do.


Thy Dungeoneer Part 5 - Good Graphics

Posted by
Matt Sieker
in Coding on Mon 21 March 2011

It's been too long since I wrote one of these. It's been kind of a busy week, and I really don't want to code while on pain medication; I don't think the Ballmer Peak happens with Vicodin.

So, as hinted by some of the issues I had with visibility checking in my previous post, the structure the maze is generated in isn't the best to do actual gameplay in. Only the left and top walls are stored in a particular cell. This leads to having to go grab the adjacent cell and check its left or top properties to get the right or bottom wall of a cell.

Instead, I need to take that structure and expand it out, turning the maze cells into empty tiles, and the walls into occupied tiles. This also means each maze cell expands into three actual tiles.

First though, some refactoring, to turn this rough lot of Python into something that looks more like it's been well coded. The maze generation code has been pulled out into its own file for now, since it's pretty much done and won't be edited. I'm also making a new level class to contain the entities (right now just the player), the maze, and the tile map, so it more resembles a real game.

And now that's all done and debugged, I have something looking like this:


Doesn't look like much has changed, but there have been some plumbing changes. Next up, a rather simple change: making the maze scroll around the player, so the player is always in the middle. Rather easy to implement: just calculate where the player is, then figure out how much to offset the drawing to put the player in the middle:
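The offset calculation boils down to a couple of lines like this (the screen and tile sizes here are placeholders, not the game's real values):

```python
def camera_offset(player_x, player_y, screen_w, screen_h, tile_size):
    # Pixel offset that puts the center of the player's tile at screen center.
    off_x = screen_w // 2 - player_x * tile_size - tile_size // 2
    off_y = screen_h // 2 - player_y * tile_size - tile_size // 2
    return off_x, off_y
```

Every tile then gets drawn at (tile_x * tile_size + off_x, tile_y * tile_size + off_y).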


I also made the tiles a bit bigger, in preparation for the next change which is this:


Apologies to the Painterly Pack for the source of my testing tiles. It's looking more and more like a real game now.
