Monday, December 14, 2009

A Good Programmer?

I came across this blog post via Hacker News tonight, and it gave me a little food for thought.

I should know better, of course, than to just take stuff like this as gospel truth, but I hear it a lot from people like Jeff Atwood, who make their living talking about programming. To be a "good" programmer you need to be the kind of person who just loves it, and does it all the time.

I don't know that I agree with the second one too much, mostly because I know plenty of people who talk up "bleeding-edge" technologies only because they're bleeding-edge, and couldn't begin to actually program with them if they wanted to because they lack even the most basic skills. This is primarily what I run into with kids singing the praises of the latest Microsoft technology (not to piss on Microsoft technology necessarily, but there's a reason for that). However, taken with the rest of the list, it's a little more understandable. I'll still hold on to my dreams of kernel hacking, though. ;)

The one that really hit me in a tender area was the last one..."If your potential programmer didn’t do any programming before university, and all his experience starts when she got her first job, she’s probably not a good programmer." Ouch. That describes me almost to a T. Granted, I started programming in my undergrad career while pursuing another degree, and the Master's was technically an extension of a "hobby", but before that I had never done any programming. In CS 120 I had to go to my professor's office for help because I didn't know what FTP was. Yes, it was that bad.

I had no access to any resources to even begin to understand how to do it, and didn't know what to look for anyway. It has been the primary source of my low self-confidence in my programming ability the entire time I have been attempting to make the computer bend to my feeble will. Even now, when I know I've improved so much, I still never feel like I've worked hard enough or dedicated myself enough to improving my skill. I've tinkered with a wide variety of languages but am still very much a C++/Java person.

Anyway, expressing my insecurity is not particularly helpful...I'm off to start reading more books and working on more projects.

Saturday, December 12, 2009

Two Changes!

I've moved my portfolio page to CodeMonkeyInc on Google Sites because, frankly, I am fully capable of writing a website backend, but I couldn't design my way out of an empty pool. Not to say I haven't tried really hard, but I lack the requisite skills in terms of creating backgrounds and other important images that look clean and professional, rather than like I made them up in Gimp after dicking around for a half hour. Also, I have yet to actually BUY hosting, so my iweb account will be going down after I graduate anyway.

The other thing I did this weekend was post a few of my personal projects to Launchpad so I could show them off on the portfolio page. Right now my projects on there aren't extremely impressive, but ScribbleMidi is coming along really well, and I'm anticipating having a semi-working system soon. Launchpad is a wonderful, free way to publicly post your open source projects.

Sunday, December 6, 2009

Battle With the PHP Script From Hell

In seminar class we have to write a script that takes in a gigantic (3.2MB) text file data dump, parses the data out, and inserts it into a MySQL database for an application we're working on. The first pass was done by a classmate, and although it got the job done quickly (averaging around 13 seconds), the other requirement was that the script be easy for non-programmers to modify, and it was not even remotely easy to modify (it took me half an hour to add one line). So I took it upon myself to rewrite the whole thing, and so far the result has been an interesting exercise.

Problem 1: The data is delineated by XML-style tags, but is not in an XML structure.
Problem 2: Some of the records (collections of data that represent one art piece) are invalid, as they just describe different image file names for a single art piece.
Problem 3: Some of the data is unique (such as the style, technique, etc), and the values are often in a list separated by semicolons.
Problem 4: The current version of my rewrite takes well over 400 seconds to run.
Problem 5: I had pretty much a weekend to write this.

The data dump and the way the records are structured are unavoidable. I approached the problem by reading in one record at a time and passing it through a series of functions to pull out the appropriate values, then inserting them into the MySQL database. It does this one query at a time, however, which I suspect is part of the problem.
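One obvious angle on the one-query-at-a-time problem: MySQL lets you batch several rows into a single INSERT, which cuts the per-query overhead. A rough sketch (table and column names here are made up for illustration, not our real schema):

```sql
-- Three rows, one round trip to the server
INSERT INTO art (title, artist) VALUES
    ('Water Lilies', 'Monet'),
    ('Starry Night', 'Van Gogh'),
    ('The Scream', 'Munch');
```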

The first step in improving it is exploring the REPLACE function. I'm currently running a query that checks a table to see if the current value already exists; if it doesn't, it gets added. Making these required entries unique should remove the need for those extra queries.
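Roughly, the idea looks like this (table and column names are my own stand-ins, not the real schema):

```sql
-- Make the lookup column unique so duplicates can't sneak in
ALTER TABLE style ADD UNIQUE (styleName);

-- REPLACE acts like INSERT, except that if the unique key already
-- exists it deletes the old row and inserts the new one, so there's
-- no need for a SELECT-first existence check
REPLACE INTO style (styleName) VALUES ('Impressionism');
```

Since REPLACE deletes and re-inserts, auto-incremented keys do get burned through, which is worth knowing going in.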

The result? Down to around 330 seconds, not as bad. The primary keys are a little screwed up, as expected, but since it's an auto-incremented number, it isn't a huge deal.

At this point the primary bottleneck is in the bridge tables. Here's how this works: all the bridge tables simply connect an art piece with its corresponding style, technique, etc. So there's a style table, which is only a list of styles, but we need to take the artID (one select query), then select the corresponding styleID, and put them in one table. This wouldn't be so bad except it's 2 queries in a row for each of the tables; that's a lot of individual queries.
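In principle, the two SELECTs and the INSERT can collapse into a single statement (names below are illustrative, not our actual schema):

```sql
-- Look up artID and styleID and insert the pair in one query
INSERT INTO art_style (artID, styleID)
SELECT a.artID, s.styleID
FROM art a, style s
WHERE a.title = 'Water Lilies'
  AND s.styleName = 'Impressionism';
```

That's one query per bridge row instead of three, which should help when I get around to optimizing this part.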

Note: at this point I realized I had made an extremely stupid error and kept adding onto the records array rather than clearing it after each record was processed *facepalm!*
Fixing that major leak got the script down to 131 seconds.

After making a huge difference with the array I managed to cut it down even more by fixing the art table creation. This function was using two different queries to build the table, which was unnecessary. It's now running at around 16 seconds!

Right now I'm pretty happy with where the script is at, so I'll save the optimization of the bridge tables for later.

Wednesday, November 18, 2009

Linux in Media

I know for a fact that a lot of Sci Fi original movies use Linux in their computer scenes (one of them was really, really obviously the Gnome desktop, which made me extremely happy). It's free, doesn't violate any copyrights, and allows the movie makers to customise the look of it so it can look super scientific. I wonder how much media actually uses Linux for their "fancy tech computer" scenes...or if most of them just use a Flash movie or what...

Thursday, November 12, 2009

World Usability Day

I went to World Usability Day down at IU and was really impressed; it was a fantastic conference. Very, very, very small, but it gave me a lot to think about and didn't put me in a grouchy mood. No one there except myself, Dr. Gestwicki, and Austin was a programmer, which made us feel like outsiders in a sense, but we all still gained a lot by going and enjoyed ourselves quite a bit.

The highlight of the conference was definitely Rod Collier, the guy who designed the Letterman Building here at Ball State. His presentation was both informative and interesting, his PowerPoint was amazing, and he gave some fantastic examples of innovative design in his own home (which he designed himself!).

Unfortunately I didn't get a good sense of what everyone was thinking of when they were talking about design and usability...I guess it was just physical objects...but most everyone there avoided the topic of computers like the plague (even the guy who worked for Tuitive, which designs web-based apps and webpages for clients). This was unfortunate, since CS could use more good usability people.

Another refreshing aspect was the attitude; everyone there obviously knew what they were talking about, but didn't seem to be wallowing in their own sense of self-importance...I felt like this was because these people are professionals working for real clients, rather than a group of artists, and that really makes an enormous difference.

It also encouraged me to get a better design sense...I still have a lot of work to do in that regard, especially since one of my interests is web design, where this will be an essential skill. Unfortunately I'll always be a struggling outsider, because I don't fool myself into believing for a minute that design is something people can just "pick up". The amount of research done on usability, the ridiculous number of unusable systems, and the amount of money companies will spend on design are all obvious proof that design is another "this isn't as easy as it looks" area, but on the plus side, I'm far more aware of it now than I ever was before.

Now, if you'll excuse me, I'm off to redesign my website again :)

Surface Project Success!!!

I meant to post this WAY earlier, but I've been unbelievably busy. The Surface project was a great success! It didn't crash, people seemed reasonably interested in it, and we made some very interesting observations.

Primary observations of interest:
1) People didn't seem aware of what the navigation bar was. They would mess around with the cards already on the table rather than interacting with the navigation bar.
2) They kept trying to resize the cards (totally understandable)...we'll need to build in a flexible resize function for all the UI elements if there is any continued work on this project.
3) They "accidentally" discovered the flip function. Again, there needed to be an obvious visual cue for this.
4) They kept trying to interact with their own names, which was, in retrospect, a completely obvious interaction we neglected to take advantage of due to time constraints.

We're going to start going through the data soon, which will also be extremely interesting. More to come!

Thursday, November 5, 2009

Brief UI Post

I installed TweetDeck today just to check it out:

Oh. My. God. Why. This is easily the worst UI design I've seen in a while. What do all those little icons do? I have no idea until I hover over them with the mouse. Why so much noise? I can't even tell what I'm freaking looking at.

And don't get me started on the Growl integration:

Yeah, that's attractive...takes up a ton of screen real estate, makes an annoying sound, and doesn't conform to my Growl theme (quick toaster pop-up along the bottom).

I hate, hate, hate cluttered UI designs that take up more space than they deserve. If you can fit all your content into a thin column, make the application the size of said column. If you need more of these content windows, there's this concept called tabs. Nothing that only takes 140 characters to display should EVER take up my entire desktop real estate.

To be fair, I'm a stickler for clean desktops. As should be obvious:

So I'm definitely biased. But I have trouble viewing something like that as usable. Need to do more research on this topic :)

Thursday, October 29, 2009

Screenshot of the Latest Build

I've been running into a lot of challenges with the layout, but it's starting to look really nice.

One weird thing about the controls is that they will always overlap each other...not the best behavior, so I need to either look into Grid layouts or see if there's just an "Overlap = false" or something.
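For the record, here's the kind of thing I mean by a Grid layout; this is just a sketch from memory, and the Surface control names may not match what we're actually using:

```xml
<!-- Each control gets its own Grid row, so nothing can overlap -->
<Grid>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto"/>
    <RowDefinition Height="*"/>
  </Grid.RowDefinitions>
  <s:SurfaceButton Grid.Row="0" Content="Navigation"/>
  <s:ScatterView Grid.Row="1"/>
</Grid>
```

As far as I can tell there's no magic "Overlap = false"; panels that give each child its own cell (Grid) or stack them in order (StackPanel) seem to be the way to go.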

Wednesday, October 28, 2009


Another project I dug up today while doing a Qt tutorial for my Open Source class: a small Brainfuck interpreter with a slick GUI. Qt is a fantastic graphics toolkit, and although I love Gnome, I'll probably end up doing other GUI projects in Qt...I haven't bothered to take the time to learn GTK+ yet, after all. :)

Clever Qrurls Code

I'm proud of this, so I thought I'd share's very simple, just calling out to two different web services to create a QR Code from a URL:

public Qrurls()
{
}

private void createBitlyURL(String longURL)
{
    String URL = "" + longURL + "&login=" + bitlyAPIUsername + "&apiKey=" + bitlyAPIKey;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(URL);
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    string tempString = null;

    if (null != response)
    {
        // Scan the response line by line for the shortUrl field
        Stream resStreamLocal = response.GetResponseStream();
        StreamReader sr = new StreamReader(resStreamLocal);
        string str;
        while ((str = sr.ReadLine()) != null)
        {
            if (str.Contains("shortUrl"))
                tempString = str;
        }
    }

    if (tempString == null)
        throw new Exception("Something is wrong!");

    bitlyURL = tempString.Substring(tempString.IndexOf(""), 20);
}

private void createQRCode()
{
    String URL = "" + bitlyURL;

    // Download the generated QR Code as raw PNG bytes
    WebClient wc = new WebClient();
    byte[] data = wc.DownloadData(URL);

    PngBitmapDecoder decoder = new PngBitmapDecoder(new MemoryStream(data), BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
    BitmapSource bitmapSource = decoder.Frames[0];

    // Draw the Image
    myImage = new System.Windows.Controls.Image();
    myImage.Source = bitmapSource;
    myImage.Stretch = Stretch.None;
}


It's being used in our Surface application for the iDMAa 2009 Conference. Obviously not the most complex code I've written by a long shot, but it didn't take much time and was extremely useful, since QR Codes are getting more common.

Monday, October 26, 2009

Just so you can all feel my pain...

My Surface project is becoming littered with code that looks like this:

presContent.Background = System.Windows.Media.Brushes.Transparent;
presContent.Foreground = System.Windows.Media.Brushes.White;

and this:

System.Windows.Controls.Image authorImage = new System.Windows.Controls.Image();

I spent around 10+ minutes just now making these "ambiguous references" unambiguous in this manner. There are a few major issues with this, besides the waste of time.

1) The "Intellisense" that everyone jizzes themselves over with Visual Studio is not very intelligent.

2) Why can't it tell that, when the variable is a System.whatever.fuckmicrosoft, the constructor should also be of this type? What's the point of even having static typing if it can't tell that?

It's obvious these tools were very poorly designed, with very little forethought...I shudder to think this was done with full knowledge and not by accident, but it's almost too pervasive to be anything else. I only complain because it's making my life very, very hard. This project has gotten rather large, and the last thing I need to worry about is the jackass compiler complaining that it can't tell which 10-word namespace this data type is actually in, and could I please write out all 10 words, which makes the "using" keyword ESPECIALLY useful.

In other news, we're at over 300 commits on the project, with 2 weeks left before shipping! :D

Tuesday, October 20, 2009

Struggles in Seminar

I'm taking a 600-level Art seminar class, mainly because I have experience with the Surface, my boss is in charge of it, and the professor I'm doing my other Surface project with is teaching it. We're in week 9 and we haven't really started coding yet...this makes me extremely nervous, since I'm taking 2 other classes on top of doing a 699. The crux of our problems has been this stupid GUI do you design a cool, interactive GUI for a piece of hardware that is relatively new, when all the other applications for it are photo viewers, for an art museum that doesn't just want a photo viewer, because the photos would just be showing the stuff they have hanging on their walls? Answer: very carefully (ba-boom-ching!)

But seriously, this Surface may be intended for "collaboration", but the reality is far from that. The problem of designing a deep, interesting interface for what essentially amounts to a huge table that sees your fingers is no trivial task...and it hasn't been studied for years like a traditional GUI. It's really difficult to communicate the nuances, but here goes:

1) The Surface is smaller than you'd expect, has less RAM than you'd like, and is rectangular, which, no matter how collaborative your app is, will force certain people into a driving position and others into a watching position. The Surface can only comfortably be surrounded by maybe 6 people total, but that's not nearly enough room for everyone to have their own "interaction space". Plus, it gets really cluttered REALLY REALLY fast.

2) You have two options: design the GUI in a way that allows each person to have their own, isolated interactions, or somehow design it in a way that everyone can interact together. Each approach has its own problems; the isolated interactions can be much deeper, but they're not collaborative, while the collaborative design can only be so deep, otherwise one person can ruin it for everyone.

Honestly, I would just love to see a design that can have its cake and eat it deeply immersive for everyone involved while still being collaborative. I don't know if this is just due to my own brain being used to a specific way of doing things, but I feel like this is an impossible task for computers...they are necessarily a single-person experience. The fantastic thing about online games is that they can simulate a deep, collaborative experience, but you're still interacting with it through a single-user machine.

This brings up an interesting many multi-player games, collaboration is necessary to complete a task. Sure, two people aren't controlling the same character, but their collaboration facilitates the common goal...and that collaboration is always extremely deep and nuanced. Unfortunately, I can't figure out a great way to apply this to the Surface without attaching multiple terminals to it, or making a ridiculously complex game. This thought has really intrigued me, though, so I may follow up later.

Wednesday, October 14, 2009

CCSC:MidWIC and Thoughts on Women in Computing

This past weekend I presented a poster at MidWIC, the Midwestern Women in Computing Conference. This was both an extremely frustrating and fairly interesting experience, and got me pondering the whole "women in computing" issue. I felt like the conference not only missed the point of why women are such a minority in CS, but actually brilliantly illustrated the problem: we were placed in a small, cramped building away from the main building with all the food and keynotes (where CCSC was taking place), presented posters more or less only to each other, were given lipstick in the swag bags, and had a social dance at the end of the first day...the organizers were obviously under the impression we were all 9 years old.

However, that brings up an interesting issue. Women are most certainly a minority in Computer Science, but how do you not only encourage more women to join, but also promote them in a way that doesn't come off as condescending? So much of this is a major cultural problem that isn't going to be solved overnight. And by intentionally trying to promote one group you're doing the very thing that no one really wants: singling out a group of people based on their gender rather than their common interest in computer science, and thereby creating a very delicate dilemma. I want women to be promoted, in a sense, but only in the sense that, if I were to go to a general computer science conference, where everyone is attending due to nothing more than a deep interest in computer science, I will be taken as seriously as everyone else and I'll find as many women being taken seriously as men. So at that point I'm not even being promoted as a woman anymore, just as someone who loves computer science and has some knowledge about it, and therein is the catch-22.

However, that leads to another question...should we even bother promoting women in CS to each other? We're already women in CS, it's too late to make us be any more "into" CS. The challenge is to get teenaged girls interested in Computer Science, and plant the idea in their heads to check it out when they get to college. That requires mentoring programs (as well as a massive cultural overhaul, but again, that will take years at best) where women not only show their younger counterparts that they can do CS, but that they can be as good at it as anyone else. This is a challenge, especially for less well-funded schools, but it's a massive step in the right direction.

None of this is original, I'm sure, but if nothing else it helped illustrate in my own mind the problems being faced right now.

Saturday, July 25, 2009

Slackware Day 2: Configuration and New Packages

So now I had a fresh Slackware install on my Eee...what next? The first task I accomplished was fixing a problem with the tutorial I followed: it only had you install packages from the first install CD, but I wanted packages from ALL the install CDs. Fortunately, this is easy sauce.

Go to any of the FTP Slackware mirrors and navigate to the slackware folder, which has all the software packages. Since the ones I had installed from the ISOs needed to be updated anyway, I just copied all of them into my local slackware folder. I then mounted the hard drive on my Slack system, and navigated to the package collection I wanted to install, and ran:

#installpkg *

This will install everything in the folder.

Important note: You'll notice MOST of the packages have the extension .txz. If you have an older version of pkgtools, Slackware won't know what to do with these. A good way to handle this is to download the .tgz packages of tar, gzip, pkgtool, and xz, install those using installpkg, and then install everything else.

Finally, not only do we have a great base system, but X is installed! I chose not to install the KDE or KDEI packages; my window manager of choice is xmonad, so I'll go through the steps to install that. :) However, if you installed the XAP packages, you should have at least fluxbox, blackbox, xfce4, and a few others, so you're more or less in business!

Before configuring X, make sure to add a user. All the previous actions needed to be done as root, but X has lots of user-specific configuration, so if you haven't done that already, get it out of the way.

Finally, to configure X:

#xorgsetup

That should be it! To choose your initial window manager, run:

#xwmsetup

Running startx should drop you into whatever you chose! Further, more fine-tuned X configuration is usually done in /etc/X11/xorg.conf; configuration for your personal X session (as a user) is done in ~/.xinitrc.

Bonus task: Installing Xmonad
Xmonad is my favorite wm, especially on my netbook. It's fast, clean, minimal, and relies very heavily on the keyboard. Another good one is ratpoison, but I use xmonad for my day-to-day :)

To install, go to the Xmonad site, then Downloads, then Slackware. All the packages you need to install xmonad on your system are right there (there aren't many of them!). Download them all into a folder (I called mine xmonad), navigate into the folder, and run

#installpkg *

After that has finished, open up .xinitrc (it should have been created after you ran xwmsetup).
Where it says:

exec /usr/bin/startflux

or something similar, replace it with:

exec xmonad

Done! Exit out, start X again, and enjoy Xmonad :)

That's pretty much the extent of my Slackware-installation posts; I've managed to get the full system running quite a bit faster than my first time. Any subsequent posts will more than likely deal with getting wireless up and running, though compared to the first time I tried, it should be significantly easier...the 2.6 kernel has the best wireless drivers built in already! I hope people find this helpful, or even a little interesting...thanks for reading!

Friday, July 24, 2009

Slackware Day 1: Installation

Of course, the first step before doing ANYTHING is to back up my laptop...don't want to lose anything important!

While waiting for the files to back up, I needed to create a boot disk on my USB (Eee's don't have a disk drive :P). According to the step-by-step I referenced in the last post, I needed to perform a dd to get the usbboot.img I downloaded from the Slackware site onto my flash drive.

I got the device ID for the flash drive by running the command:

sudo fdisk -l

This is crucial...the last time I assumed I knew what the device ID was, I borked my entire system (side note: if you find yourself thinking "this is taking a really long time", you borked your system). Fdisk says that my USB is /dev/sdb1, so my dd command would be:

dd if=usbboot.img of=/dev/sdb1 bs=512

That resulted in:

54504+0 records in
54504+0 records out
27906048 bytes (28 MB) copied, 1.26218 s, 22.1 MB/s


Important edit: this will only install the very basic packages. My next post discusses how to get *all* the packages later, including updates, but at this step it would be easier to not bother with the ISOs at all, and go to any of the ftp sites offering slack packages. There should be a slackware folder in slackware-current with all the package groups; download them into your slackware folder. This way, you get everything, and it's all up-to-date.
Next step: getting an ISO of the install disk 1, mounting, and copying the slackware/ directory to another USB drive. I installed gmountiso (recommended by The Ubuntu Geek, a fantastic Ubuntu help-page), mounted the ISO, and copied the /slackware folder (with all the packages) onto an external hard drive.

Finally, booting and installing! Following the instructions on the website, I managed to get the system and packages installed without a hitch. I didn't manage to get LILO installed immediately, but I was able to boot successfully into my new system using the following commands in the GRUB prompt:

grub> root (hd0,0)
grub> kernel (hd0,0)/boot/vmlinuz root=/dev/hda1
grub> boot

And now for the real work, next time: configuration and installing new packages!

Wednesday, July 22, 2009

And Moving Back to Slackware...

I change Linux distros more often than I change underwear. It's always for different reasons, but the result is always the same...I get to spend time setting up a brand new system, working out all the little hiccups, until it's running smooth as silk. Today is Part 1 of my descent into madness: The Migration Back To Slackware.

Slackware was my very first Linux distro, from before I even really understood what Linux was. I ruined about 10 CDs trying to get the iso images to burn correctly because I was using Windows, and that shit is hard! Finally...Slackware was mine. I spent the next few years just learning how to manipulate the most basic functionality, culminating in my finest achievement: learning how to successfully compile my own kernel. But really, the furthest I got was building a usable desktop environment. Sure, that's a success in and of itself, but it left huge gaps in my knowledge that other, gentler Linux distros slowly filled. When my friend recommended Arch to me, I was very excited, since it's basically Slackware with package management. Package management eventually became my friend after I worked with Debian (which probably directly influenced my move to Ubuntu as my primary desktop).

Recently, though, I've been dreaming of Slackware again. I think it's finally time to come back to my first toy distro, and see how much I've REALLY learned after all these years.

Resources to start:
How To Install Slackware On The Eee
The Main Slackware Page

Saturday, June 13, 2009

Migration to Arch

The other day I did a dd to my hard drive, and my Crunchbang install got completely wiped out. To the point where the computer refused to boot. The positive side to this story is that I had been thinking about migrating my netbook back over to Arch Linux, and the massive trashing gave me the perfect excuse to do this...

I had used Arch in the past as both a desktop OS and a server OS, after I had moved away from Slackware and before I really understood how nice package managers are. It's lean, mean, lightning fast, and a LOT of work, but I felt like it was more rewarding than working with Slackware (really only because Slackware was my first distro, so I didn't have a very solid understanding of what the hell I was doing). Crunchbang is a beautiful, solid OS to be sure, but I decided it was high time I moved to something even more lightweight for my netbook.

So, I'm currently running Arch Linux with the slick tiling window manager Xmonad (the WM I heard about through my new favorite webcomic, GeekHero :)). It took about 2 days total to get it set up 100% how I want it, and now I have a netbook that boots up and is ready to use in under a minute. Firefox, Evolution, and Audacious are basically the only non-CLI programs I use (I really can't use CLI web-anything), which isn't a fun setup for Linux newbies, but I enjoy it quite a makes the mouse completely optional outside of these programs, which in turn makes everything much more efficient and significantly less frustrating (relying on the mouse is definitely an exercise in frustration).

The best part about the Eee is the extremely generic chipset for everything, which means that all drivers needed are already built into the 2.6 kernel, no extra configuration needed. My main recommendation is, of course, regarding the wireless management: installing the NetworkManager (the same one used in Crunchbang!) is absolutely worth it. Other small, but extremely useful additions were Xmobar, a wonderful little status bar application designed for integration with Xmonad; trayer, exclusively to display the little nm-applet in the corner; and feh, for desktop wallpapers. This article was extremely helpful in getting this all set up.

Now, if only I could find a good replacement Twitter/ status client...

Friday, May 29, 2009

Tales of C# Programming for Win32 Beta APIs, Part 1

Many, many people ask me what, exactly, my job is. I work for the Centre for Media Design basically doing whatever my supervisor wants me to do, and my current project is to develop an interactive data visualisation graph utilising information from a database of attendees of the IDMAA conference, being held here in August. This project has had a massive learning curve for me due to the fact that it is in a language I'm unfamiliar with (C#), using an API I'm unfamiliar with (Microsoft WPF/Surface) and with little to no documentation (the Microsoft surface is a brand-new piece of hardware and there isn't even a book written about it yet). Fortunately someone who is a significantly better programmer than me is assisting, and so far we've managed to hack out a pretty impressive looking demo.

One of the major issues with the Microsoft API is, predictably, how non-transparent it is. It appears that everything is designed to be tied directly into the XAML (the MS XML junk that describes the objects you're working with), making the C# clunky and difficult to follow at points (where do these freaking event handlers come from? why do you add them the way you do? no one knows). Threading was another delightful adventure into "wtf..." land; apparently, you cannot access an object inside of a thread unless it was created within that thread. Sure, this helps prevent synchronisation problems, but it's like cutting off your arm so you won't be tempted to reach for that last doughnut. Another major annoyance: drag and drop functionality isn't really "built in"...if you want to implement drag and drop, there's a rather long, detailed tutorial on the topic. Which is extremely useful considering the hardware is a touch surface.

Regardless, the project is going extremely well and I'm enjoying the challenge quite a bit. Once the project is finished it should be fairly impressive, if it's done right. Beyond that, the challenge will simply be to make it engaging enough for users, so we don't run into a trend of people getting bored with it almost immediately. That opens up the possibility of designing in some rudimentary game elements, maybe allowing users to "collect" interesting data, or garner points for connections, things of that nature. More on that as the project expands!

Wednesday, May 27, 2009

Things Visual Studio Does That Piss Me Off: Part 1

If you don't declare a variable, it will try to autocomplete everything. For example, while trying to write for(i=0; i!=thing; i++), it will ALWAYS autocomplete to for(if=0; if!=thing; if++), and even if I change the first if back to i, it will continue to insist that I was, in fact, trying to use if for each subsequent i. It recognizes the for loop structure but is not smart enough to realize an if is not appropriate in this situation.

Also, it fucking copies blank space.

Monday, April 13, 2009

Things I learned this weekend (or: the list with no point)

1) "Accessing" an element in C++ lists (and related containers) by plain assignment means "make an exact copy of this for me to use temporarily". Unless you grab a reference or an iterator, changes to that local copy never touch the element still sitting in the list.

2) Don't hardboot a server randomly (especially out of laziness), even if said server is giving you all sorts of sass. You may knock it to the curb for a second, but it will trip you into oncoming traffic.

3) Port forwarding for fun and profit. Also, default router passwords are for the win. Too bad this resulted in the death of a server...

4) Crunchbang skimps on language packs. This would be more acceptable if one of the language packs I had to install wasn't for the most widely spoken language on earth. Ethno-centric much...

Reinstalling Ubuntu server this weekend...everything is going according to plan

Thursday, January 29, 2009

Project 2: String Length Counter in C

This weekend I spent some time exploring C strings, and comparing/contrasting them with strings in C++, after doing a project in which we were required to write a word length counter without using the built-in strlen function (oh noes!).

Strings in C++ are nothing like strings in C. I had a vague idea of this while doing the project, as I have used C++ strings in the past, and they don't require as much work as C strings. There is a very good reason for this: while C++ strings are very nice containers with nice built-in functions, C strings are, quite literally, nothing but a group of chars. This can be expressed either as an array or a pointer. The difference between the two is that one is a pre-sized block of memory containing a line of chars, while the other is an address of a block of memory (not necessarily pre-sized) containing a line of chars. Arrays do decay into pointers when being passed as parameters and such, but the two are still fundamentally different.

A very interesting thing about pointers and arrays is that, despite the fact they are represented completely differently in memory, they can still be treated the same in this context. If you declare two variables:

char array[] = "Kitty";
char *pointer = "Cat";

you can use the same notation to access the individual characters in both, i.e.:

array[1] to get i
pointer[1] to get a

In the first case, the compiler will start at the first character of array and move one in order to get the value. In the second case, the compiler will fetch the pointer value, add 1 to this value, and then finally go to this location to load the character.

This is what, for me, makes C so interesting. It is far more low-level than C++, and as such, the fact that you're accessing values in memory is far more transparent. The malloc function I used in my homework literally sets aside a block of memory of the size indicated (returning a pointer to it), and that memory stays allocated until it is either deallocated by free or the program ends.

C++ string containers are just special templates that allow you to do far less damage accidentally (though it is said you can pretty epically destroy the world if you do mess up). They do a lot more to automatically manage your memory for you. You can convert one to a C string with c_str() (which actually returns a const char *, and is necessary for a few file input functions such as those around fstream), but for all intents and purposes they are their very own, very easy to use data structure.

Anyway, I hope to do more C programming this year as a way to improve my knowledge of pointers, and hopefully allow me to start writing some more heavy-duty projects such as small compilers and what-have-you. Let the C adventures begin!

Thursday, January 22, 2009

Project of the Week

I decided that since I've stopped posting regularly (unfortunately!) I'm going to start posting about a Project of the Week. Essentially, this will be some project that I accomplish, either over the entire week or just on a weekend. Posting my experiences will both help me remember them, and provide a reference for myself. And maybe someone else will enjoy reading about it as well :)

This week's was easy: set up my Ubuntu server with SSH and an IP address manager daemon that would allow me to admin the server remotely, and eliminate the need (for the most part) for the splitter box I have between my Mint desktop and the server itself. This is mainly for convenience, of course, and the fact that the splitter box often cuts off access to the mouse/keyboard and makes the monitor look yucky.

Of course, installing Ubuntu Server edition (CLI only!!) was a breeze; the installer is far more stripped down (it made me nostalgic for Slackware), but still very straightforward. It may scare the Linux n00b, but anyone who has installed a few Linux distros in their time would be comfortable with it.

So next up, I wanted to be able to SSH into my box from anywhere. This is both for convenience, and as I mentioned, removes the need to directly interact with the computer, which is a pain when it's sharing a monitor with my main Linux desktop (which, just to mention, is Mint :3). In order to accomplish this hardly daring feat, I created an account with DynDNS, which is a free service that allows you to add your computer as a "host". I can't remember if there is a limit to the number of hosts you can add, but I'm inclined to say there is, since they have account upgrades that you need to pay for.

After doing this I installed ddclient (sudo apt-get install ddclient), which is essentially a daemon that keeps track of when your dynamic IP address changes, and notifies DynDNS of the change. DynDNS then updates your host so, voila, your hostname always resolves to the right computer! During the installation Ubuntu automatically configures ddclient for you, more or less. There is a small amount of hand-configuration, but you need only to change a few lines, and you're off and running!
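For the curious, the hand-edited part of /etc/ddclient.conf ends up looking roughly like this. This is a sketch from memory: the hostname and credentials are placeholders, and the exact lines the Ubuntu installer generates may differ by version:

```
# /etc/ddclient.conf -- rough sketch; your generated file will differ
protocol=dyndns2
use=web, web=checkip.dyndns.com, web-skip='IP Address'
server=members.dyndns.org
login=your-dyndns-username
password='your-dyndns-password'
your-hostname.dyndns.org
```

The use=web line is the part doing the clever bit: rather than trusting the router, ddclient asks an outside service what your public IP currently is, and pushes any change up to DynDNS.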

After doing all this (extremely hard (; ) work, you should be able to directly SSH into your server box and admin from any computer you wish.

The next project I'm considering will be a bit more complicated, and more programming-oriented than server-oriented, so hopefully slightly more interesting :)