Thursday, December 2, 2021

Bible Code Compression

Last night, I had an interesting thought about file compression. I have no idea if this idea is new or not. First, let me take you through the train of thought that led me to it. I began by thinking about embedding an image within some text. One of the simplest methods for this would be something like Base64 encoding. Unfortunately, upon seeing some Base64 encoded text, many people would recognize it and could easily decode it and get the original image back. Perhaps then one could encode the image using more natural looking text. Maybe grouping characters into two groups (one to represent a 1 and the other to represent a 0) and then generating a string of words where every nth character encoded a bit of the image. It seems unlikely that one could generate text in this manner that would look like real natural text. I then began to wonder if one could take an existing text document and create an algorithm that could extract an image from the text. For example, say one selected a section of the Bible. Then group the characters into two groups (maybe consonants are 1s and everything else are 0s). Could one then pick a number n such that every nth character of the section of the Bible encoded a bit of the desired image? Probably not for a set section of the Bible, but the Bible is a long book. Maybe one could search the entire Bible for a section where this could be done. If this were possible, one could encode the image as a starting point in the Bible (and maybe a length) and an algorithm to extract it. Wait a second! That sounds like compression!

Let’s expand the thought from images and text to all data. Could we pick some long file (let’s call it the master file) in place of the Bible and search it in a similar manner for any file (or subsections of a file)? The basic algorithm would look something like:
-Take a file or subsection of a file.
-Pick an algorithm (like check every nth bit).
-Search through the master file using the algorithm to see if the file or subsection can be found.
-If it is found, the file or subsection can be converted into a location in the master file and an algorithm.
-If it is not found, try a different algorithm or smaller subsection and try again.
Unfortunately, depending on the sizes involved, such an algorithm could be pretty slow. I began thinking about what kind of file could be used as the master file. I first considered files that are large (1 GB+), publicly available, and would continue to be publicly available for a very long time. For a truly large master file, one could possibly use the Bitcoin blockchain. Then I thought about just generating the master file. I figured I could use a random number generator to do it. But wait. If I’m going to use a random number generator to create the master file, why create the file at all? I could just generate it (or parts of it) at runtime as needed. It turns out that a master file isn’t necessary. One can simply use a bit generator function like the C library function rand() with a set seed. An extra step then needs to be added to the algorithm to select a bit generator function and to possibly vary it if the file or subsection isn’t found in the generated stream of bits.
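A toy version of the search might look like this in Python, using a seeded PRNG as the "master file" (the function name, parameters, and limits here are all illustrative, not a real implementation):

```python
import random

def find_in_stream(target: bytes, seed: int, stride: int, max_bits: int = 1_000_000):
    """Search a seeded PRNG bit stream for `target`, reading every
    `stride`-th bit. Returns the starting bit offset, or None.
    A toy sketch of the scheme described above."""
    rng = random.Random(seed)
    bits = [rng.getrandbits(1) for _ in range(max_bits)]
    # Unpack the target into bits, most significant bit first.
    want = [(byte >> (7 - i)) & 1 for byte in target for i in range(8)]
    n = len(want)
    for start in range(0, max_bits - stride * n):
        if all(bits[start + stride * k] == want[k] for k in range(n)):
            return start
    return None
```

In a real compressor, the outer loop would also vary the seed (the generator parameter) and the stride (the algorithm parameter), which is exactly why the search is so slow.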

If the file or subsection can be found in the generated bits, I think the compression would be very good; however, the likelihood of finding the file or subsection in the generated bits is probably pretty low. Let’s do some simple estimations. First, let’s consider what a compressed file or subsection might look like:
-Bit generator ID: 1 byte
-Bit generator parameter: 4 bytes (this would encode where in the generated stream to begin)
-Algorithm ID: 1 byte
-Algorithm parameter: 4 bytes
-Length (in bytes) of data to be generated by decompression: 8 bytes
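Packed naively (this layout is my own sketch, not anything standardized), the fields above fit Python's struct module like so:

```python
import struct

# A sketch of the compressed representation described above.
# "<" = little-endian with no padding; B = 1 byte, I = 4 bytes, Q = 8 bytes.
HEADER = struct.Struct("<BIBIQ")

packed = HEADER.pack(
    3,          # bit generator ID (hypothetical)
    123456789,  # bit generator parameter: where in the stream to begin
    1,          # algorithm ID (hypothetical)
    7,          # algorithm parameter: e.g. take every 7th bit
    1048576,    # length of the original data in bytes
)
print(HEADER.size)  # 18
```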
That totals 18 bytes. This means that, AT MOST, this algorithm could compress only 2^(18*8) possible files or subsections! That’s not very many. Nearly half of the compressed data is the length of the original data, which is very likely to be pretty small, so most of those bytes will always be 0, reducing our realistic possibilities even more. The probability of compressing a file or subsection of length L bytes would be 2^(10*8)/2^(8L) (the 8-byte length field is fixed by L, leaving only 10 freely varying bytes to address possible outputs). This can be simplified to 2^(8(10-L)). Notice that for values of L < 10, we get a probability greater than 1. It doesn’t make sense to try to compress files smaller than what we could possibly compress them to anyway, so we can just ignore them. Let’s look at a few different lengths to get an idea of the probabilities:

Length (bytes)    Probability
22                1.26 x 10^-29
100               1.81 x 10^-217
1000              6.96 x 10^-2385
5000              7.63 x 10^-12018

Even at 22 bytes (50% compression if only 1 byte is used for length), the probability of compression is extremely low and it gets much lower from there. It is also worth noting that the bit generator ID and algorithm ID are not likely to use all 8 bits, further reducing our probabilities. After crunching the numbers, it looks like this sort of compression would not be viable, but exploring the idea was still interesting.

Monday, June 15, 2015

Adventures in Minecraft Mod Management

My nephews and I play Minecraft. I use the Magic Launcher instead of the regular launcher because I like how it applies mods. My computer is very old (a few months shy of 10 years old), so I need to use Optifine to get a frame rate that even approaches 30 FPS. I could just install Optifine directly into the Minecraft JAR file, but sometimes Optifine causes problems so I like to be able to disable it easily. I could maintain two versions of every Minecraft JAR, one with Optifine and one without, but that seems tedious. The Magic Launcher does a good job managing mods like Optifine but it falls short when managing mods for something like Forge. My nephews had me install a few Forge mods (OreSpawn, Pixelmon, and Wildycraft). Every piece of documentation I could find on Forge says to just put these mods into the Minecraft mods directory. If you only have one such mod at a time, this works perfectly well. I even found some forum posts mentioning that you could create subdirectories for different versions of Minecraft but that was of limited use. Whenever my nephews wanted to switch between Forge mods, I would be forced to rename or relocate mod files. I very much wanted an easy way to tell Forge which mods to load on any given launch of Minecraft. I suspected that a command line argument for this purpose might exist because it seemed like a great solution to what I perceive to be a common problem. Unfortunately, neither Google nor Bing nor searches of Forge's own Wiki turned up any information on the command line arguments I sought. At first, I started poking around the Minecraft launcher's JSON files to see if Forge's versions of those contained any settings I could use. I didn't find any but I did notice the minecraftArguments setting which contained the command line arguments passed to Minecraft. I figured that if I found any command line arguments for Forge, I could just tack them onto this setting for testing purposes.

While searching about online, I accidentally stumbled across the Forge GitHub page. I downloaded the source files and began digging through them to look for any command line arguments. Eventually, I came across the file which contained code for two command line arguments: --modListFile [file] and --mods [mod1,mod2,...]. I toyed around with --modListFile first as I thought using a list file would be the most convenient way to handle my situation. Unfortunately, the format for the mod list file is a bit different from what I expected. It's a JSON file with a few settings, one of which stores a list of mod file names. It expects each mod file name to be in one of the following two formats: A:B:C or A:B:C:D, where A, B, C, and D are strings not containing colons. Depending on whether or not you use the D part, those names are converted into these file names: A.B.C.B-C.jar or A.B.C.B-C-D.jar. Unless you're willing to rename your mods to suit this format and its conversion, the --modListFile argument will not be useful.
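As I read it, the conversion could be expressed like this (my own restatement of the behavior described above, not Forge's actual code):

```python
def modlist_name_to_filename(name: str) -> str:
    """Convert a mod list entry of the form A:B:C or A:B:C:D into the
    file name described above: A.B.C.B-C.jar or A.B.C.B-C-D.jar."""
    parts = name.split(":")
    if len(parts) == 3:
        a, b, c = parts
        return f"{a}.{b}.{c}.{b}-{c}.jar"
    if len(parts) == 4:
        a, b, c, d = parts
        return f"{a}.{b}.{c}.{b}-{c}-{d}.jar"
    raise ValueError("expected A:B:C or A:B:C:D")
```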

The --mods argument is much more useful. It takes one argument that is a comma-delimited list of mod files with paths relative to the .minecraft directory. I am now keeping my mods in a subdirectory of .minecraft called ForgeMods, so a --mods argument for me might look like:

--mods "ForgeMods\1.7.10\Pixelmon-1.7.10-3.4.0-universal.jar,ForgeMods\1.7.10\Gameshark.jar,ForgeMods\1.7.10\FinderCompass-1.7.10.jar"

The double quotes are not required if there are no spaces in the mod list, but I like to use them anyway so I don't have to worry about adding spaces later if need be. Also, note that I still maintain the use of Minecraft version subdirectories (1.7.10 in this case), but this is completely unnecessary. To test out the --mods argument, I added it to the minecraftArguments setting in the Minecraft launcher JSON for one of my Forge installations. (Remember that double quotes and backslashes need to be escaped when part of a JSON string.) I then loaded up the Minecraft launcher and ran that version of Forge. Success!
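A quick way to get the escaping right is to let a JSON library do it; for example, in Python (the mod path here is just my example from above):

```python
import json

# The raw argument string, exactly as it must reach Minecraft.
mods_arg = '--mods "ForgeMods\\1.7.10\\Pixelmon-1.7.10-3.4.0-universal.jar"'

# json.dumps escapes the quotes and backslashes for embedding in the JSON file.
print(json.dumps(mods_arg))
```

Pasting the printed result into the minecraftArguments value gives you a string that decodes back to the original argument.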

I opened up the Magic Launcher to add my new --mods command line argument but I ran into two problems. The latest version of Forge for Minecraft 1.7.10 (1.7.10-Forge10.13.4.1448-1.7.10 at this moment) does not install a JAR file. It seems to use some sort of "inheritsFrom" setting instead. The Magic Launcher is unable to use this installation of Forge. After some searching, I saw someone suggest using the "universal" Forge JAR (for this version of Forge) and have the Magic Launcher apply it as a mod to the standard version of Minecraft. This approach was successful.

I then proceeded to try to add the --mods argument only to find that the launcher does not support adding command line arguments for anything other than the JVM. I tried just adding the argument anyway, but Java was not fond of that. It failed with an "unrecognized option" or something of that nature. I considered trying to just switch over to using the Minecraft launcher full time but that launcher has this annoying habit of downloading (or just creating) a fresh copy of the "natives" directory every time I launch Minecraft. I think it's supposed to delete that directory when Minecraft closes but it doesn't do this on my computer, so I end up with a ton of these directories after a while.

I decided I would try to mod the Magic Launcher itself to see if I could get it to support the --mods argument. The Magic Launcher is some sort of self-executing JAR file. I'm not familiar with the technology, but 7-Zip was able to open it up as a JAR and extract the files. There was a problem though. The Magic Launcher JAR contains some file names that differ only in capitalization. For example, there are both an a.class and an A.class. In some file systems (case-sensitive ones), this is fine, but in a Windows file system (case-insensitive), this causes problems. 7-Zip handled this problem by renaming some of the files. Fortunately, none of the files I needed were any of the ones that were renamed. I used fernflower to decompile the class files into Java source files. I dug through the decompiled code, which wasn't easy because virtually every variable was named by just a letter or two, until I found the code that handled command line arguments for Minecraft. Luckily for me, this same file contained the code that handled the JVM arguments I mentioned earlier, so I would only end up needing to change one file in magic\launcher\. The class used a single ArrayList to store all of the command line arguments passed to Java and then to Minecraft. The only thing that changed an argument from a JVM argument to a Minecraft argument was where it appeared in the list of arguments (before or after -mainClass, I think). The class used a for loop to add the JVM arguments to the overall arguments list. During that loop, I added a check for "--mods" and stored the mod list into another variable if --mods was found. Later, if this variable wasn't null, I added "--mods" and it to the list of arguments. I used javac to compile my updated file into ap.class. Oddly enough, I had to manually add the throws clause to one of the methods. I don't know if that was a problem with fernflower or just a question of compiler settings.

Adding my new ap.class file back into the Magic Launcher proved a bit difficult. Initially, I tried to just open the Magic Launcher in 7-Zip and drop in the updated ap.class file. This resulted in an error having to do with files having duplicate names. I assume this was related to the a.class-versus-A.class problem I mentioned earlier. Then I tried just using Java's jar program to update the ap.class file, but that program was unable to work with the self-executing JAR file format used by the Magic Launcher. I searched the Internet a bit for information on self-executing JAR files to see if, at the very least, I could just extract the JAR file from the Magic Launcher executable. I was unable to find any useful information quickly so I opened up a random JAR file in a hex editor to find out what file header it used (hexadecimal bytes 50 4B 03 04, which is probably the same as a standard ZIP header since I believe JARs are just renamed ZIPs). I then opened the Magic Launcher executable in a hex editor and searched for the JAR header. After locating it, I deleted all data before the header and saved this new file as a JAR. The Java jar program was now able to update the ap.class file. I considered trying to re-add the JAR to the executable but since the JAR works quite well on its own on my system, I just left it as a JAR.
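The header hunt can be scripted; here is a sketch (assuming, as above, that the wrapper simply prepends stub data to an ordinary ZIP/JAR):

```python
# Locate the first ZIP local-file header (PK\x03\x04) in a self-executing
# JAR and return everything from that point onward as the bare JAR.
ZIP_MAGIC = b"PK\x03\x04"

def extract_jar(exe_bytes: bytes) -> bytes:
    offset = exe_bytes.find(ZIP_MAGIC)
    if offset < 0:
        raise ValueError("no ZIP header found")
    return exe_bytes[offset:]
```

Note that this only works when the embedded archive is a plain ZIP; it is the programmatic version of the hex editor surgery described above.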

The final step was to create configurations for each of my mods in the Magic Launcher, adding the appropriate --mods command line argument for each. It took me the better part of a day to do all of this but it was definitely worth it to no longer have to deal with renaming or relocating mod files every time my nephews decide to play a different mod.

Friday, September 6, 2013

Controlling a Computer Via Text Messaging (SMS)

This is an idea that I’ve had running through my head for ages, but I finally decided that I would attempt to make it a reality. I don’t have one of those fancy smart cell phones. Mine (a Kin ONEm) is a nice phone and has some limited Internet capabilities (IE Mobile 6), but it certainly can’t run any VNC-style software. It can send and receive text messages, though, and that is my gateway to controlling my computer. You may be aware of the fact that AOL Instant Messenger (AIM) is capable of sending and receiving messages via SMS (text messaging), but if not, just send a text message in the format Username: Message to 265-060. The recipient can then respond to your message and it will be sent to your phone (from 265-010). You can then reply to that message without bothering with specifying a username. This functionality provides a simple means of communication between a cell phone and a computer. To intercept these messages, I decided to make use of my multi-protocol instant messaging client of choice: Pidgin.

Pidgin is a fantastic piece of software. It’s cross-platform (so it works on more than just Windows) and supports many instant messaging protocols (like AIM, Windows Live Messenger, and Yahoo Messenger). Most importantly, though, Pidgin provides developers an interface for creating plug-ins. Unfortunately for someone like me who generally only works in Microsoft Visual Studio when using C/C++, Pidgin is designed to be built using a Linux-style command line environment. Just downloading and installing everything necessary to be able to build a Pidgin plug-in was nightmarish. It took me many hours over several days to finally get everything configured properly. The first step was to install Cygwin and MinGW. This part was fairly painless, but part of the configuration suggested adding MinGW to the PATH variable within the .bashrc file in my home directory. Unfortunately, Cygwin did not make a home directory for me and I couldn’t figure out where the home directory should be, so I just ended up adding to the PATH variable in the bash.bashrc file in the /etc directory. The next step was to download all of the dependencies needed to build Pidgin: GTK+, gettext, Libxml2, Perl 5.10, Tcl 8.4.5, GtkSpell, Enchant, Mozilla NSS, SILC Toolkit, Meanwhile, Bonjour SDK, Cyrus SASL, Intltool, and Pidgin’s Crash Reporting Library. Pidgin suggests putting all of these within a subdirectory of the Pidgin development directory, but such a thought offends my sense of organization. After all, these libraries could be used by software other than Pidgin. To compound my issues, the path to my development folders contains a space (bad idea, I know). It was quite a struggle to figure out how to modify the make file settings so that all of the dependencies could be found. What gave me the most trouble was that all of the paths in the make files were given in a relative fashion using a Unix path format (e.g. $(PIDGIN_TREE_TOP)/../win32-dev), but I was supposed to use a Windows path format to point to my dependencies directory (e.g. C:/My\ Documents/Etc/Libraries). I found this especially weird since Cygwin uses Unix paths (it even mounts Windows file systems to /cygdrive/c, /cygdrive/d, and so on). After figuring out that I needed to use a Windows-style path, I was finally able to build Pidgin.

Building Pidgin wasn’t the end goal, though. It was just a way of preparing my system to build Pidgin plug-ins. With my system properly configured, building a test plug-in was trivial. Next, I turned to actually designing and implementing the thing. I decided to monitor incoming messages for any message beginning with a forward slash (/). Any such message would be interpreted as a command for the computer to execute. I chose to use the common /command arg1 arg2… format as a nice, simple way to encode commands. I also decided to implement a simple permissions system to determine which users could use which commands. (I wouldn’t want random strangers executing commands on my computer!) Additionally, any instant messaging account currently active in the user’s Pidgin would have permission to run any command. This would save me the trouble of configuring a default super user. After I finished implementing and testing the permissions system, I began to tackle the most useful feature of my plug-in: sending a screenshot of the computer to a cell phone. This feature presented two problems. First, as mentioned before, Pidgin is designed to be cross-platform and grabbing an image of a user’s screen is a very platform-specific operation. Second, while AIM supports SMS, it does not support MMS, so I couldn’t send any images that way.
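The command format is simple enough to parse with a split; a sketch in Python rather than the plug-in's actual C:

```python
def parse_command(message: str):
    """Parse an incoming IM message of the form '/command arg1 arg2 ...'.
    Returns (command, args) or None if the message is not a command."""
    if not message.startswith("/"):
        return None
    parts = message[1:].split()
    if not parts:
        return None
    return parts[0], parts[1:]
```

In the real plug-in, the command name would then be checked against the sender's permissions before anything executes.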

My solution to the first problem was a decision to implement any significant platform-specific operations into separate executables and have the plug-in call them as child processes. I opened up Google and Visual C++ 2010 Express and set about writing a Win32 application to grab an image of the screen and send it to standard output. If you’ve written a Win32 application before, you’ll know that WinMain does not provide you with an argc/argv equivalent (it provides only the command line as a single string). Fortunately, Windows provides some global versions of argc and argv in stdlib.h: __argc, __argv, and __wargv (you can also get __targv if you use tchar.h). I decided that I would output the image in either JPEG or PNG format, depending on which format produced the smaller file size. Typically, this will be PNG because it is better suited for compressing images where coloring is not particularly varied. After some searching, I settled upon using libpng and the IJG JPEG Library. By default, both libraries prefer to just write the data directly to a stream, but since I wanted to compare the resulting file sizes (and because I despise using temp files when they’re not necessary), I had to create an internal buffering class that I called ExpandingBuffer. It’s basically like an std::vector of buffers. It’s worth noting that both libraries also make use of C’s setjmp and longjmp functions, a feature I had never seen before. They work a bit like using a goto statement from a function back to some location within itself or within one of its callers.

Once I was satisfied with my screenshot program, it was time to figure out how to send it through e-mail. I toyed with the idea of writing my own implementation of SMTP, but why reinvent the wheel? It would be better to just use a library or program that someone else had already written. While searching, I stumbled across a program called cURL. cURL is a nice program that supports many Internet transfer protocols including SMTP and POP3, the latter of which I intend to use at some later point. I should state, at this point, that Pidgin makes use of GLib, a cross-platform library that implements some common functionality. I was specifically interested in its child process execution functions, the documentation of which I found to be a bit confusing (I’m still not sure if I’m using the functionality properly). At first, I coded my plug-in to execute the screenshot program, which wrote the image to a pipe created by GLib. When that program terminated, I then started cURL, read the data from the screenshot program’s output pipe, formatted the data for e-mailing, and then wrote it to cURL’s input pipe. If you’re familiar with fully buffered streams, you can probably guess what happened: the screenshot program froze. Pipes are a kind of fully buffered stream, which simply means that the entire stream is stored in a buffer somewhere in memory. Once that buffer is full, any writes to that stream will be blocked until enough data is read from the stream to make room for the data that is to be written. This meant that I had to recode my plug-in so that cURL was executed while the screenshot program was still running so I could read the data from the screenshot pipe (and send it to cURL) to make room for more data to be written to the pipe. The documentation for cURL is pretty lackluster. I couldn’t find any documentation on how to tell cURL that I was done sending data to it. 
I found a post on the Internet somewhere that said I should send the e-mail terminating CRLF.CRLF, but that didn’t work. After a while, I figured out that all I needed to do was close the pipe. It seems obvious now, but at the time, it was a frustrating problem.
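The fix, in miniature: start the consumer before the producer finishes and let the data stream through, so the producer's stdout pipe never fills up and blocks it. Sketched here with Python's subprocess module and stand-in commands (the real plug-in uses GLib's child process functions):

```python
import subprocess

def pipe_through(producer_cmd, consumer_cmd):
    """Run producer and consumer concurrently, streaming the producer's
    stdout into the consumer's stdin. Avoids the full-pipe deadlock
    described above."""
    producer = subprocess.Popen(producer_cmd, stdout=subprocess.PIPE)
    consumer = subprocess.Popen(consumer_cmd, stdin=producer.stdout,
                                stdout=subprocess.PIPE)
    producer.stdout.close()  # let the consumer own the read end
    out, _ = consumer.communicate()
    producer.wait()
    return out
```

Closing the parent's copy of the pipe is also what finally signals end-of-input to the consumer, which is exactly the "just close the pipe" lesson above.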

Now that I had all of the code in place, it was time to test out my plug-in. I sent a command from my phone to take a screenshot of my computer and send it back to the phone. I could see cURL open up, but it didn’t seem like it did anything. It almost seemed like it crashed. After some investigating, I figured out that Avast’s E-mail Shield was causing cURL to fail, so I disabled that and retried. Again, cURL opened up but this time it appeared to do something. However, my phone failed to receive the screenshot. After some digging, I found that cURL had a command line option to output trace information to a file. That revealed that cURL was unable to connect to the remote SMTP server. I tried to connect to the server myself using HyperTerminal but that also failed. I checked my firewall and modem settings. Neither was set to block cURL or port 25 (the SMTP port). Eventually, I began to suspect that my ISP (AT&T) might be blocking the port to prevent spammers from using it. I did some research online and found that AT&T does indeed block the port but that they can unblock it for you upon request. A blog posting I found detailed how AT&T wanted to charge the author money to unblock port 25 and how it was a pain for him to finally get it unblocked for free, so I had a feeling I was going to have some issues getting the port unblocked as well. I was right. The first few people I talked to at AT&T confirmed that they block port 25 and that I’d have to contact their premium technical support and pay them either $15 a month for a year or about $50 one time to have the port unblocked. I’m pretty sure that’s actually illegal, but I’m just a layman (FCC’s Open Internet Rules). I politely informed the AT&T representative that I thought it was wrong of them to charge money to people for AT&T to stop blocking something they shouldn’t be blocking in the first place. I asked if I could be forwarded to someone so that I could make a complaint.
After being forwarded around to a few different people at AT&T, none of whom could or would take my complaint, I ended up talking to a technical support person who informed me that he could indeed unblock the port for me at no charge. Finally! It only took two hours on the phone. I once again went and tested my plug-in. Nothing. I checked cURL’s trace file. It appeared as though the e-mail had successfully been sent. To verify, I had my plug-in send the e-mail to a regular e-mail account instead of to my phone. That worked fine, except that my e-mail ended up in the spam folder. I wondered now if my cell phone provider (Verizon) was filtering my e-mails. I contacted someone at Verizon and I was informed that they don’t do spam filtering except by specific addresses/servers. My address and server were clearly not spamming anyone, so it seemed like I had a problem somewhere else. I tidied up my code a bit, converting all sent LFs to CRLFs (as per the SMTP protocol), but that had no impact. I thought that perhaps Verizon was actually filtering my e-mails because cURL was identifying my computer by its local name (rather than by some domain name). As stated before, cURL has very poor documentation. I couldn’t find any information on how to change how cURL identifies my computer to the SMTP server. Fortunately, cURL is open source, so I spent a few hours searching through the source until I figured out how to do it (for anyone with a similar problem, it gets tacked onto the URL like “smtp://”). After some tinkering, I discovered that the Verizon SMTP server seems to discard e-mails if the identified domain name does not match both the e-mail address of the sender and the IP address of your machine. Luckily, I have a domain name that points to my machine, so once I started using that, everything worked! I now had a Pidgin plug-in that could send screenshots to my phone. However, there’s still quite a bit more work to do before the plug-in will be of much use.

Thursday, July 5, 2012

My Love of Games

I love games: board games, sports, and, most of all, video games. It’s the idea of making games that first drew me to computing. Way back in middle school, my dad let me put the old Adam computer in my bedroom. I’m not sure why I wanted it, but I did. Unfortunately, we had no games for it, so it seemed it was only useful as a word processor, which is very boring for a kid. However, my dad had this book that contained games for the Adam. Wait. A book with computer games? How was such a thing possible? My dad loaded up the BASIC cassette and showed me how to copy the text from the book into BASIC. I was intrigued. I don’t think I ever bothered copying one of those games, but I did experiment with my own programs. I never made anything of note on the Adam before I ruined the BASIC cassette by accidentally putting it into the broken drive. I was upset until I learned that our DOS machine in the basement could do the same thing. I spent hours and hours toiling away at a simple text adventure game that you can still find on my web site. I have never lost my desire to make games. I still do want to make a real game, one that people really enjoy. The closest I came was with my final senior design project in college. My group designed and created a game based on the Mega Man series. Our group page is still available on my old college’s server. Years have passed since then without my even working on a game. I have ideas in my head, for sure, but I don’t have anything concrete enough to actually start making a game. A week ago, a friend of mine from college (one of the people in my senior design group) contacted me. After some talking, we’ve decided to try to make a game together in Flash. We’re taking things slow but I think we’ll actually be able to make a full game if we can stick with it. Slow and steady wins the race.

Unfortunately, this reminds me all too well of why I left the Clickteam community all those years ago. I’ve just moved on and so has the community. We’re simply not a good fit anymore. Oh well, all good things must come to an end.

Sunday, June 24, 2012

More Multimedia Fusion and the Perils of Unfinished Updates

After completing my updates to the Blowfish object, I wasn’t thoroughly convinced that it was perfect.  I thought it could benefit from some testing by people who didn’t make the thing.  Often developers don’t accurately foresee all of the ways users will try to use their software, so it can be difficult to account for all possible situations.  I was advised by Clickteam to post the updated object on one of their forums so that others could test it out.  Now, I wasn’t expecting a large response to my update, but I also wasn’t expecting no response to it.  As of writing this, my post has only 12 views and no replies.  The only comment I received was in Clickteam’s chat room where a user informed me that he couldn’t believe I had updated the Blowfish object because it was so old.  He has a point.  The object is years old and may have been replaced by something better in my time away, or perhaps it has simply fallen into obscurity.  I probably should have verified its use within the community before making the major updates that I did.  Oh well.  Maybe there are some people who will find it useful should they ever realize it’s been updated.

Clickteam has also asked me if I could port some of my extensions to their other platforms.  Specifically, they asked for the Expression Evaluator, Boolean, and Associative Array objects.  The request for ports of the Associative Array object didn’t surprise me since it’s my most popular extension.  The other two objects did surprise me.  I believe the Boolean object was my first extension.  It has some neat features but was awkward to use due to restrictions in MMF’s event list architecture.  I wasn’t aware that anyone used it to be honest, especially not enough people to warrant a request from Clickteam for it to be ported.  The Expression Evaluator object is a pretty neat extension.  It allows for the execution of mathematical expressions and allows the user to create their own custom functions for use in those expressions.  This was probably my favorite extension, but I wasn’t aware of anyone making much use of it.  I want to at least attempt to port these extensions, but the list of new platforms seems a bit overwhelming.  There are now SDKs for Flash (ActionScript3), iOS (Objective-C), Java, XNA (C#), and HTML5 (Javascript).  I’m pretty familiar with ActionScript3, C#, and Javascript.  I’ve used Java in the past but never extensively.  I’ve never used Objective-C, but I probably won’t bother with iOS because that’s Apple’s platform and I despise Apple.  The other platforms seem reasonable, though.

I decided that I would port the Associative Array object first since most of these languages support such structures natively.  When I started, I noticed that my current source code for the Associative Array object was in an intermediate state.  Great.  Five years ago, I was in the process of some sort of update because I had variables that I initialized but never defined or used and I had a function header with a comma after the last parameter as if I was about to add a new parameter.  I’m going to have to spend some time analyzing my code so that I can make sure everything is working properly.  I’m not looking forward to this, but it’s a necessary step along the path I am about to take.  In the future, I suppose I should make a to-do list part of each of my projects in case I get sidetracked for a few years.

Monday, June 18, 2012

Multimedia Fusion and Blowfish

Multimedia Fusion is a great program made by a company called Clickteam. If you’ve ever heard of Klik ‘n’ Play, Multimedia Fusion (MMF) is the latest version of that. I started using Klik ‘n’ Play way back in the late 90s (probably 1998). I made some pretty bad games with it, which are still available for download on my web page. About a year later, I purchased my first copy of MMF and discovered that the makers of the software had their own web site and community. I never made any particularly good software with MMF, which is not to say MMF is incapable of good software. It is. In fact, several people I know have made successful commercial ventures with games they made in MMF. I really found my niche in the community when I started learning C++ the summer before my senior year in high school. Suddenly, I could make extensions for MMF, adding new functionality to the program. I may not have been much of a game maker, but I wasn’t too bad at making useful extensions. Eventually, I developed a relationship with some members of the company. In fact, they’ve sent me a few free copies of their software over the years. Alas, over time, I grew apart from MMF and the community. Throughout college, I focused more and more on traditional programming paradigms and less on MMF. I preferred the more flowing structure of languages like C++ to the event list structure of MMF.

Eventually, I lost all contact with the community. This is something I regret quite a bit. Occasionally, I would get e-mails from people asking about my extensions, but not too often. Back in 2008, I was made aware of a bug in one of my extensions, the Blowfish encryption object. This is the one extension that I had actually sold to Clickteam, so when I decided that I would try to fix the bug, I had to get a copy of the source from them. I actually found and fixed the bug. It turns out that the Blowfish algorithm expects data in big-endian form (I’ll discuss this more later), so I just needed to reverse the byte ordering before and after encryption. I did this, but then I got an even bigger idea in my head. It seems silly that each object that wants to use encryption should have to implement the algorithm itself. What if there were a generalized encryption system where any object that wanted to encrypt data could simply be passed the information of an encryption object and could then make use of the encryption object’s own algorithms? Well, I set about working on that model and got in over my head. After a while, I gave up and never returned the fixed source code to Clickteam. I was at a time in my life when motivation was difficult to find. Years passed. I’m finally now trying to dig myself out of this rut. In fact, this blog is one of the ways I’m trying to do it. I figure that the more I tell people what I’m doing, the more likely I am to follow through with it. Granted, no one reads this blog, but maybe, some day, someone will. I consider not correcting the bug in the Blowfish object to be a significant failing on my part, so finally fixing it is of great importance to my sense of self-worth.
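The generalized encryption idea could be sketched in C as a small table of function pointers: the consumer only sees the interface, never the algorithm. Everything here (the names, the trivial XOR stand-in cipher) is illustrative and not from the actual extension source:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of a generalized encryption interface: an encryption
 * object exposes its algorithms through a small table of function pointers,
 * so any other object can encrypt data without implementing a cipher itself.
 * All names here are illustrative. */
typedef struct CipherInterface {
    void *state;  /* cipher-specific state, e.g. an expanded key */
    void (*encrypt)(void *state, unsigned char *data, size_t len);
    void (*decrypt)(void *state, unsigned char *data, size_t len);
} CipherInterface;

/* A trivial stand-in cipher (XOR with a single key byte), just to show the
 * shape; a real implementation would plug Blowfish in here. */
static void xor_crypt(void *state, unsigned char *data, size_t len)
{
    unsigned char key = *(unsigned char *)state;
    for (size_t i = 0; i < len; i++)
        data[i] ^= key;
}

/* A consumer that only knows the interface, not the algorithm behind it. */
static void protect_buffer(CipherInterface *c, unsigned char *buf, size_t len)
{
    c->encrypt(c->state, buf, len);
}
```

Any extension could then accept a `CipherInterface` and call through it, which is roughly the model described above.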

The bug in question, as mentioned earlier, had to do with the byte ordering of data sent to the encryption/decryption functions. If you don’t plan to use any encrypted data with any other implementations of Blowfish, this isn’t actually a problem, but it is likely that you’d want to do that. When passing data (stored as an array of characters) to the Blowfish functions, I would just typecast addresses into the array as pointers to unsigned longs. In little-endian formats, this puts the first character as the lowest byte of the newly formed long. It seems that Blowfish implementations prefer that this first character be in the highest byte of the long. I’m not entirely sure if this means that Blowfish implementations prefer big-endianness or if it’s simply that they convert the array of characters using bit-shifts and ors rather than simple typecasts. I wasn’t satisfied, however, with just fixing this bug. I wanted to add some different cipher block modes. In the past, I had only used the electronic codebook mode (ECB) because I simply wasn’t aware of the other modes in use. At first, I was only going to add cipher-block chaining mode (CBC), but after further investigation, I realized that cipher feedback mode (CFB) and output feedback mode (OFB) were relatively simple to implement. I decided to add all three new modes to the object. I’m not sure what the relative advantages and disadvantages are of each mode, but since they were so simple to add, I decided to just add them and let the user sort it out.
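For reference, the shift-and-or conversion mentioned above looks something like this. It places the first byte of the array in the highest byte of the word regardless of the machine’s endianness, which is what Blowfish reference implementations expect, so no typecast or byte swap is needed:

```c
#include <stdint.h>

/* Pack four bytes into a 32-bit word with the first byte in the highest
 * position.  Unlike a pointer typecast (which is endian-dependent), shifts
 * and ors give the same result on any machine. */
static uint32_t pack_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Unpack a 32-bit word back into four bytes, first byte from the high end. */
static void unpack_be32(uint32_t v, unsigned char *p)
{
    p[0] = (unsigned char)(v >> 24);
    p[1] = (unsigned char)(v >> 16);
    p[2] = (unsigned char)(v >> 8);
    p[3] = (unsigned char)v;
}
```

Converting the character array this way before and after each block operation is equivalent to the byte-order reversal described above.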

I also wanted to rewrite the file-handling function. In my younger days, I seemed to have a strange dislike of keeping files open, so I would read entire files into memory and then process the data. Once done, I would reopen the file and write the new data. For small files, this is fine, but for large files, this can be problematic. In rewriting the function, I had to open the file in both read and write modes. I had never used a file in this manner before and was unaware of one of the quirks involved. In my first tests, I was able to read and write my first block of data, but I was unable to read subsequent blocks. My first test took a little 1.82 MB file and turned it into a gigantic 64 MB file because I would keep reading and writing the same block of data over and over. After some research, I discovered that when switching between using fread and fwrite, you need to have a call to fseek. Before writing data back to the file, I would call fseek to jump back to the start of the block, but since the write operation placed me at the location in the file of the next block of data to read, I had no reason to call fseek again. Because an fseek call is required before switching back to fread, I had to add a dummy call to fseek that doesn’t change my position in the file. After fixing that problem, my updated file handling worked perfectly.
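This quirk is actually in the C standard: on a stream opened for update (e.g. "r+b"), a program may not switch between reading and writing without an intervening file-positioning call such as fseek. A minimal sketch of the block-processing loop, with a hypothetical byte-flipping transform standing in for Blowfish:

```c
#include <stdio.h>
#include <stddef.h>

#define BLOCK_SIZE 8  /* Blowfish works on 8-byte blocks */

/* A hypothetical stand-in transform (real code would encrypt or decrypt). */
static void flip_bytes(unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        b[i] ^= 0xFF;
}

/* Process a file in place, one block at a time.  An update-mode stream
 * requires an fseek between every switch from reading to writing and back,
 * even when the position doesn't change -- omitting the second fseek is
 * exactly the bug that kept rereading the same block. */
static int process_file_in_place(const char *path,
                                 void (*transform)(unsigned char *, size_t))
{
    FILE *f = fopen(path, "r+b");
    if (!f) return -1;

    unsigned char block[BLOCK_SIZE];
    size_t n;
    while ((n = fread(block, 1, BLOCK_SIZE, f)) > 0) {
        transform(block, n);
        /* Seek back to the start of the block before writing... */
        fseek(f, -(long)n, SEEK_CUR);
        fwrite(block, 1, n, f);
        /* ...and the "dummy" seek before reading again, as the C standard
         * requires when switching from output back to input. */
        fseek(f, 0, SEEK_CUR);
    }
    fclose(f);
    return 0;
}
```

Without that final `fseek(f, 0, SEEK_CUR)`, the next fread is undefined behavior on most implementations, which is what produced the endlessly repeating block.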

It took me nearly a week to make and debug all of the changes that I wanted, but I feel it was worth it to remove this blemish from my psyche. I’ve returned the updated object to Clickteam to see what they think of it.

Wednesday, June 6, 2012

User Interfaces and my Secret Project

I don’t make it a secret that I despise designing and implementing user interfaces.  One of the beauties of the command line interface is that it is so simple to implement.  Unfortunately, unless you’re using the application programmatically (e.g. in batch files), the command line interface isn’t very user-friendly.  It’s simple to use but, in a Windows environment, opening a console, navigating to the proper directory, and then typing out the command line can be cumbersome.

Ultimately, you need to design your user interface for your intended users.  For more esoteric programs like my FLV Script Data Extractor, a command line interface is fine because anyone interested in that sort of thing is likely to be handy with the Windows console.  However, for my secret project, my intended users are not necessarily going to be hardcore computer users.  They’re much more likely to be casual computer users who may never have used a console application or a command line interface before.  Unfortunately, that means that I had to design a graphical user interface for my secret project.  I considered using .NET to make the interface because Visual Studio 2010 makes creating GUIs in .NET very easy, but I thought the overhead of .NET for such a simple application was overkill.  To my dismay, Visual C++ 2010 Express does not provide any interfaces or templates for creating GUIs in Win32.  You just have to code the whole thing.  I’m not familiar with making GUIs in Win32, so I was kind of stumbling all over myself at first.  I tried using Windows controls (static text, edit boxes, etc.), but I couldn’t get the coloring right without doing extra work that I didn’t feel like doing.  After banging my head against the wall for several hours, I decided to try using a dialog box interface.  I’ve done my fair share of dialog box programming from my time making objects for Multimedia Fusion.  I wasn’t sure how to make a Win32 program that was just a dialog box, so I had to play around with it.  Finally, I realized I could just scrap all of the default window creation code Visual C++ created for me and simply use the DialogBox function by itself.  A handy trick I’ve used in the past is to create my dialogs in Visual C++ 6.0, which has a neat little dialog box designer, and then copy the proper code over to Visual C++ 2010.  I suppose I could just draw the interface on paper and then code it manually, but designing it visually is so much faster.
After several days, I finally finished the interface.  The code is messy, with way more global variables than I’d like, but I’ll just blame that on a lack of experience. 

I toyed with the idea of also allowing the user to use the command line if they so desired, but I ran into a bit of a speed bump.  In a Win32 application, Windows does not break up the command line into argv and argc.  Instead, it passes the whole command line as a single string.  Windows does provide a function (CommandLineToArgvW) for converting this string into something similar to argv and argc, but it only works on wide chars (Unicode).  This really shouldn’t be a problem since I did design the program to use Unicode, but I was using TCHAR, which is a Windows type that can be either wide char or char depending on preprocessor definitions.  I thought about using CommandLineToArgvW, but that would defeat the whole point of using TCHAR.  Given that no one is ever likely to use the command line, I decided it wasn’t worth the effort to write my own function to parse the command line.
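For the curious, a sketch of what such a parser might look like, written here in portable C with plain char rather than TCHAR, and handling only simple double-quoting (the full quoting and backslash rules that CommandLineToArgvW implements are more involved; all names are illustrative):

```c
#include <stddef.h>
#include <string.h>
#include <stdlib.h>

/* Split a raw command line string into an argc/argv pair.  Arguments are
 * separated by spaces; a double-quoted run is copied verbatim so arguments
 * may contain spaces.  The caller frees argv and, if argc > 0, argv[0]
 * (which points at the start of the shared character buffer). */
static char **split_command_line(const char *cmdline, int *argc)
{
    char *buf = malloc(strlen(cmdline) + 1);            /* holds all args */
    char **argv = malloc((strlen(cmdline) / 2 + 2) * sizeof(char *));
    int n = 0;
    char *out = buf;
    const char *p = cmdline;

    while (*p) {
        while (*p == ' ') p++;            /* skip separators */
        if (!*p) break;
        argv[n++] = out;                  /* start of a new argument */
        while (*p && *p != ' ') {
            if (*p == '"') {              /* copy a quoted run verbatim */
                p++;
                while (*p && *p != '"') *out++ = *p++;
                if (*p == '"') p++;
            } else {
                *out++ = *p++;
            }
        }
        *out++ = '\0';                    /* terminate the argument */
    }
    argv[n] = NULL;
    *argc = n;
    return argv;
}
```

One buffer and one pointer array cover the worst case (every other character starting a new argument), so there is no per-argument allocation to clean up.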