MySQL Stored Procedure, NAME_CONST, and Character Sets


A project I had helped work on recently needed to move servers, and in the process upgraded from MySQL 5.0 to MySQL 5.1.41. They ran into some quirks with 5.1, and I thought I would document our workarounds.

We were having serious performance problems with a Stored Procedure that was called by a Windows Client on thousands of machines. On 5.0 it took under 0.02 seconds; now it was taking 25-35 seconds. Inside this stored procedure we ran a query something like this:

SELECT u.id INTO spUserId FROM users u WHERE u.username = spUsername;

Here spUsername was a Stored Procedure variable of type CHAR(50). While the stored procedure was running, I executed SHOW FULL PROCESSLIST; to see what the matter was. I could see the query, but MySQL 5.1 had wrapped spUsername in a MySQL function, so if the value of spUsername was "sheldon", it looked like this:

SELECT u.id INTO spUserId FROM users u WHERE u.username = NAME_CONST('userName',_utf8'sheldon' COLLATE 'utf8_unicode_ci');

So when I did an EXPLAIN (removing the INTO spUserId, since it wasn't needed to understand the lookup and would throw an error), I saw it was performing a full table scan. The table had nearly one million rows, so that was a very bad thing. It was completely ignoring the index on u.username. At first I thought the NAME_CONST function was the culprit, that it was doing something weird. To be honest, I was confused as to why it was being substituted in the first place, since the documentation for NAME_CONST didn't mention Stored Procedures at all.

Then I removed _utf8 and COLLATE 'utf8_unicode_ci' from the picture. Now I was getting somewhere, since the query returned 1 row using the correct index. The users table stored its data with the collation latin1_swedish_ci. So, ultimately, from what I can tell, MySQL was reading the entire table and converting the table's username field to utf8, instead of converting spUserName to latin1. The reason it chose utf8 is that the Windows Client set its session to utf8 when connecting (I think; I couldn't quickly look at the source to verify this, but it was the assumption I made).


So I decided to manually cast the value from utf8 to latin1 into a separate user variable, executing this command:

SET @tmpUserName = CAST(userName AS CHAR CHARACTER SET latin1);

Then I modified my query to use the new variable:

SELECT u.id INTO spUserId FROM users u WHERE u.username = @tmpUserName;

Bingo, I was back down to 0.0022 seconds for the Stored Procedure to execute, and the new Database server is screaming fast.
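Putting the whole fix together, a minimal sketch of the stored procedure might look like the following. The procedure, table, and column names (getUserIdByUsername, users, id, username) are illustrative, not our actual schema:

```sql
DELIMITER //

CREATE PROCEDURE getUserIdByUsername(IN spUsername CHAR(50), OUT spUserId INT)
BEGIN
    -- Cast the incoming utf8 value to the table's character set (latin1),
    -- so MySQL can compare against the index on u.username instead of
    -- converting every row's username to utf8.
    SET @tmpUserName = CAST(spUsername AS CHAR CHARACTER SET latin1);

    SELECT u.id INTO spUserId
    FROM users u
    WHERE u.username = @tmpUserName;
END //

DELIMITER ;
```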

Posted in Programming

RESTful Web Services & Nirvana


I’ve been hearing, reading, and talking a lot about REST, RESTful Web Services, how to implement them, and even how to explain it to your wife. Then I see or read presentations about what is and what is not a RESTful Web Service. I’ve created and consumed quite a few web services, so while I’m not a leading expert in the field, I have a lot of experience with them. So instead of explaining my thoughts on the subject over and over again to different people, I’m writing them down. For those with a short attention span, I’ll sum up my feelings on the subject in one sentence, and then explain in detail for those interested.

Representational State Transfer (REST) [is an] architectural style, for distributed hypermedia systems, [that describes] the software engineering principles guiding REST and the interaction constraints chosen to retain those principles.


My feelings about REST come, almost word for word, from the opening paragraph of Roy Fielding’s dissertation on REST. REST ultimately is a set of principles with some outlined constraints to adhere to those principles. The key point is that it is principle driven. It’s not a protocol. It’s not a pattern. It’s not a specification. REST is about fundamental principles. That is why web services that implement REST are RESTful Web Services, not REST Web Services; and why nobody calls SOAP Web Services SOAPful.

Nirvana, The Concept

Now I’m not talking about something that smells like teen spirit, but the concept and philosophy of Nirvana. I think it is a good example, since it is described as being a “perfect peace of the state of mind that is free from craving, anger, and other afflicting states.” It is this goal that millions of people living on this earth strive for, and spend lifetimes trying to completely achieve.*

As Web Developers, we can decide if the benefits of using REST (which many people don’t even bring up when debating it) are worth the cost of following its constraints and principles. There might be projects where SOAP, XML-RPC, or other technologies are more appropriate for the situation than RESTful Web Service design (which is blasphemy to many). Once I needed to create a client and web services for a customer that might get used 3-4 times a month by a single user. On top of that, that web service would maybe serve up 300 API calls per month, if that. I didn’t need the caching, statelessness, and scalability that REST can provide, so I used SOAP. One hour later, it was working great, and it has been for the last five years. I didn’t have to use SOAP; I could have used simple HTTP POSTs with command names to get the job done.

The bottom line is that REST is based on principles. Part of following principles is that it is up to the developer to decide how to follow them, unlike a standard, which instructs the developer how to follow it.

HTTP Methods

The notion of using GET, PUT, POST, and DELETE was not explicitly outlined in Roy’s dissertation on REST. It mentioned GET as an example once, but the guidelines for using each of those HTTP methods were put together by other developers trying to make their Web Services more RESTful. The logic is sound: putting more information in the HTTP headers lets you cache earlier in the application stack (for example, having your web server pull directly from a cache instead of loading PHP to do it). But if a Web Service only implements GET and POST, that just means it is choosing not to take advantage of caching and layering for PUTs and DELETEs. It does not mean the web services are not RESTful. They might be a little less RESTful, but treating those four HTTP methods as a hard requirement is a misconception.
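To make the caching point concrete, here is a hypothetical exchange (the URI, host, and header values are all made up). Because the request is a GET and the response carries cache metadata in its headers, a web server or intermediary can answer the next identical request without ever touching the application layer:

```
GET /photos/42 HTTP/1.1
Host: api.example.com

HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
ETag: "a1b2c3"
Content-Type: application/json

{"id": 42, "caption": "Sunset at the beach"}
```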

The worst thing that has come out of this is that many people, when teaching about RESTful web services, use an example performing CRUD operations. While not terrible in and of itself, this leaves students without good examples of how to perform non-CRUD operations. Coupled with the extreme zeal about GET, POST, PUT, and DELETE, they try to orient their entire web service towards CRUD operations, and ultimately those services don’t work well. RESTful does not equal CRUD.

URIs & Implementation

When I read a discussion about RESTful web services, most of the discussion is about the URIs, which is ironic since the uniform interface is only one of the six constraints of REST. I think the reason is that most of these people are complaining about trying to implement others’ web services. They have the thought, “Oh, if they had only named the URIs in this fashion it would make more sense.” However, the style and look of the URIs are not the issue; it’s their uniformity that is important. Just like coding standards, you try to follow them as best as you can, but sometimes your names or conventions aren’t perfect. They say hindsight is 20/20, and designing web services is no different. You will make mistakes, and things won’t be perfect. The difference is that you can easily refactor your code; refactoring production URIs is much more complicated, and can break other developers’ code relying on your web services. Trust me, if there is a lesser of two evils, it’s not having a perfect naming convention. Pretty URLs are nice, but not an absolute.

Bending the Rule, Keeping the Spirit

I had an English professor tell me that the basic rules given to students for writing essays were to help them understand the spirit of the rules. You have an introduction paragraph outlining your points, followed by a paragraph for each point, and concluded with a summary and statement of your purpose. She said once you understood the spirit of the rules, like keeping your paper focused, well organized, and easy to follow, then you could actually bend or break the rules because you would still follow the spirit.**

The same, I believe, goes for RESTful Web Services. Twitter seems to be the example everyone likes to use to point out what “fake RESTful” looks like. Yet, for the most part, those web services are very successful even though they are not perfect. So if my naming convention isn’t perfect, but my documentation is excellent, clear, and easy to navigate, then it’s all right. You keep the spirit of the principles even if you aren’t keeping the “letter of the law” written by others about REST.


Ultimately, people are still going to argue about REST. Some will treat it like a specification, a protocol almost. They will talk about how everyone else is not using RESTful web services, yet not show in production how to really implement them. Some, like the author of the presentation I linked to above, will show better ways to be more RESTful with your web services, and really help everyone learn.

Myself? I will think of REST and RESTful like Nirvana: a goal to reach and design for. Will my web services conform to the strictest interpretations and thoughts on RESTful design? Probably not. Will I lose sleep over that fact? Not a chance. Do I look forward to trying to make great web services using the principles outlined in REST? Absolutely.


* – I hope I don’t offend anyone if I incorrectly portray Nirvana and its importance to the Buddhist faith. I picked it as an example based on my own personal understanding of it.

** – This article shouldn’t be viewed as an example of my English professor’s teaching skills, since I didn’t spend nearly as much time writing it out as I do for full articles. She was an excellent teacher, and taught me how to write very well (at least I think so, I hope). :)

Posted in Articles, Programming, Technology

PHP, Nginx, and Output Flushing


Alright, so one of the few hangups I’ve run into with moving from Apache to Nginx was output buffering. We have a few administrative tools we use to perform large operations on our library and data. These scripts normally take 3-5 minutes to run, and they output their progress and what step they are on as they run. The way they do this is that after every step in PHP I issue the same two commands:

ob_flush(); // Flush anything that might be in the header output buffer
flush(); // Send contents so far to the browser

Well, with Nginx, it will wait for the entire response from the PHP-FPM instance before sending data to the browser. This is because traditionally the time to generate the response is less than the time to send the response to the browser, so Nginx lets the CGI instance finish as quickly as possible to free it up for other requests. So even if I called ob_flush() and flush(), Nginx would wait for the entire response before sending it to the client’s browser. For our staff panel, I had to disable this buffering. It took a lot of scouring the web, but I finally figured it out:

Nginx Configuration

You need to set a few variables. I couldn’t actually figure out how to disable the buffer in Nginx entirely, but I could set it very low on a per-location basis.

So I have the following location configurations:

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_index index.php;
        fastcgi_param  SCRIPT_FILENAME  /path/to/public_html/$fastcgi_script_name;
        fastcgi_read_timeout 600;
        fastcgi_buffer_size   1k;
        fastcgi_buffers       128 1k;  # up to 1k + 128 * 1k
        fastcgi_max_temp_file_size 0;
        gzip off;
    }

The important configurations are:

fastcgi_buffer_size   1k;                              
fastcgi_buffers       128 1k;  # up to 1k + 128 * 1k
fastcgi_max_temp_file_size 0;
gzip off;

So I set fastcgi_buffer_size and fastcgi_buffers to 1k. Then you need to set the max temp file size to 0: Nginx by default will start buffering to disk once its memory buffers fill up, and setting it to 0 makes it send the data to the browser instead. The last piece of the puzzle I missed was turning gzip off, because Nginx would buffer the response in order to compress it, even if it exceeded 1k.

Now, to get it to work, in my PHP script I echo out 1k worth of HTML comment text at the start, to ensure everything after it gets sent to the browser.
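For reference, the padding trick can be as simple as something like this (a sketch, not our exact code; the 1k pad size matches the fastcgi_buffer_size above):

```php
<?php
// Make sure an output buffer exists (web SAPIs usually start one already).
if (!ob_get_level()) {
    ob_start();
}

// Pad with 1k of commented-out text so Nginx's 1k fastcgi buffer fills
// immediately and everything after it goes straight to the browser.
echo '<!-- '.str_repeat('x', 1024)." -->\n";

for ($step = 1; $step <= 5; $step++) {
    echo "Finished step $step\n";
    ob_flush(); // Flush PHP's output buffer to the web server
    flush();    // Ask the web server to send it to the browser
}
```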

Now, I don’t recommend these settings for busy production servers. However, only 3 people use this staff panel, so the performance impact is extremely low.

Posted in Programming

Debugging with PHP, Stack Traces, and Redis


Here is a cool new trick I found for debugging pieces of code and finding how often they get executed. We were running into a problem where an expensive query was being called more frequently than I thought it should be. This query was only found inside a single method of a class. However, this class, which retrieves a user’s photo album, is used all over the place. So it was really hard to determine exactly where the unneeded queries were being executed from.

This is where Redis comes in. I hate doing any extensive logging to disk on a production server, but sometimes you need some logging to track down a bug. With Redis, the log instead lives in memory and is periodically written to file. So it was very, very quick, and did not slow anything down on production.

So to log how many times, and from where, our class was being called, we used PHP’s debug_backtrace() function. It returns an array with each row holding information about one frame of the stack trace. Using this, we could identify the different parts of our program that were using our class and this SQL statement. I used Rediska as the PHP library to communicate with Redis.

So, inside of our class right before the SQL statement, I included this code:

// Get an instance of Rediska for the debug connection
$redis = Rediska_Manager::get('debug');
$stacktrace = debug_backtrace();

// Loop through the stack trace and unset the args and object entries,
// since they are unique per call and would throw off our hash calculation.
foreach ($stacktrace as $k => $v) {
    unset($stacktrace[$k]['args'], $stacktrace[$k]['object']);
}

// Start an output buffer to capture the var_dump
ob_start();
var_dump($stacktrace);
$print = ob_get_contents();
ob_end_clean();

// Hash the dump, since it will be the same each time for a given stack trace
$hash = md5($print);

// Increment the count for this stack trace
$redis->increment('debug.photoAlbum.count:'.$hash);
// Store the content of the stack trace so it can be read later
$redis->set('debug.photoAlbum.content:'.$hash, $print);

A few important lines to note: we unset ‘args’ and ‘object’ because they will be unique for each function call, and we just want to know where the function is being called from. We then set two values in Redis: one to hold the count, and another to hold the content of the stack trace.

Now, to read the data, I just made a PHP script:

// Get an instance of Rediska for the debug connection
$redis = Rediska_Manager::get('debug');
// Get all the keys matching the pattern for the counts
$keys = $redis->getKeysByPattern('debug.photoAlbum.count*');

$sorted  = array();
$strings = array();

$total = 0;

// Loop through the keys to strip out the hash
foreach ($keys as $key) {
    // Get the hash part of the key
    $parts = explode(':', $key);
    $hash  = $parts[1];

    $count = $redis->get('debug.photoAlbum.count:'.$hash);

    $total += $count;

    $string = "Count (".$hash."): ".$count."\n";

    $sorted[$hash]  = $count;
    $strings[$hash] = $string;
}

// Reverse sort by numeric value, highest count first
arsort($sorted, SORT_NUMERIC);

echo "Grand Total: ".$total."\n\n";

foreach ($sorted as $hash => $count) {
    echo $strings[$hash];
}

A few notes. First off, the KEYS command in Redis has some limitations; mainly, it should be used sparingly, and only for admin purposes. The documentation explains it pretty well.

Second, this script will sort by count and print the count along with the content of the stack trace.

This helped me identify a situation where we were calling the class several times instead of just once. The bottom line is, after figuring out the problem spot, we were able to fix it, saving us several thousand expensive queries every hour. I highly recommend this technique for debugging issues on production servers. It’s working well for me.

Posted in Programming

MySQL Sleeping Connections & PHP


If you want to skip to the explanation, just read below. But before then, here is a little background.

A Little Background

On January 2nd, we started to run into some serious performance problems with Dating DNA, so we began the process of identifying our problems and deciding how to optimize. At the beginning, the Database was the bottleneck. It would get flooded with requests and, unable to handle them all, each query would slow down. We were using a tool called Jet Profiler (which I will post about in more detail later); here is a graph it output before we started our optimizations:

The light blue is total threads connected. The dark blue is threads running a query. The red is threads taking 2 seconds or longer to run, which are slow queries. The red is bad, very bad. So we were getting into bad shape. It was lovingly nicknamed “The Red Zone” while we were working on optimizations. Now, granted, we weren’t in “The Red Zone” the whole time, but when things got busy, things would slow down.

But optimization after optimization, we started to get things more and more under control:

After about three days of optimizations, we get back down to a manageable load:

The Problem

However, when the website was having high traffic, we noticed an anomaly that looked like “blue waves.” We lovingly gave them the nickname “blue meanies,” from the Beatles’ Yellow Submarine cartoon. Here is what Jet Profiler would report:

It would get worse and worse, these big blue waves of connections reporting as “sleeping.” At first we didn’t think it would be a problem. However, whenever we had “blue meanies,” the site and iPhone app felt really slow. I won’t cover all the things I tested and tried that didn’t work; here is ultimately how we figured out our problem.

The Solution

At first, I thought it was an issue with garbage collection in PHP. So I set wait_timeout on MySQL to something really low, like 5 seconds. We then started to get errors all over the website, so we knew that these were legitimate connections from PHP. The only thing that made sense is that PHP and Apache had now become the bottleneck: MySQL was returning results so quickly that the threads were almost always sleeping, waiting for the PHP to finish. We slowly started to disable different features on the website, trying to narrow down whether a particular part of the website was causing it. After a few hours, we figured out the feature: the ChatWalls. So we started to investigate why turning off the ChatWalls would make MySQL run faster, since we had moved the ChatWalls completely off MySQL to run on Redis.

What we found is one particular function had a typo, that would cause PHP to iterate over an array not 10 to 20 times, but 1,000-2,000 times or more. This function was also called a lot by several Ajax calls. So, I fixed the typo, and the blue waves went away.

What was happening is that Apache and PHP were spending so much time processing the buggy function that it caused the rest of the web requests to slow down greatly. That kept open way too many MySQL connections, causing the blue waves, and slowing down the website even more.

So in reality, the blue waves were a symptom of the problem, not the cause. It is the whole Correlation vs Causation situation (which I probably should blog about in more detail when it comes to finding performance issues).

So if you have a lot of sleeping connections, but MySQL is performing well, most likely it is PHP or Apache slowing things down. I hope this can help those having a similar problem. As for our Database, it’s working well now. A few more problems to iron out, but it is running really fast. The few red spikes are from the score generation system doing bulk inserts, and they do not slow down the end user experience:

Posted in General, Programming

Creating Chatroom / Walls with Redis & PHP


Preface: This is not a step-by-step tutorial, but more of an outline on what I did. Also, I wrote this really quickly before heading off to dinner, so if there are parts that are unclear, or you have questions about, please leave a comment!

This last week has been a roller coaster for us at Dating DNA. We had an excellent holiday season, but with the mass volume of new sign ups our servers started to slow down. Severely. Something you never want to have happen. So over this last week I’ve worked about 80 hours (though I’m a salaried employee, woohoo… :P ) and implemented a lot of new performance boosts, and I have about four or five blog posts worth of discoveries I need to write. So before I forget them, I’m going to write them.

The first change I’d like to talk about is moving our ChatWalls (kind of like a Chatroom and Forums/Walls mashed together) from a MySQL backend to a Redis backend. This initial change actually only took about 4-5 hours. Working out some additional kinks we found with our ChatWalls took another day or so, but the migration to Redis was extremely smooth.

First, to give you an idea about our ChatWalls, here is a video of myself using them:

Redis, in a very simplified explanation, is like Memcache: a key-value store in memory. However, it is persistent (data is not lost after a restart), has more advanced data types (hashes, sorted sets), has built-in virtual memory options (the entire dataset doesn’t have to always be in memory), and offers a lot of cool operations beyond what Memcache has (return sorted sets by score, purge old records from a set, etc.).

Now, I’ll walk through the basics of installing Redis, getting it up and running, reading and setting data in Redis, and migrating our old content.

Install Redis

Ok, compiling and installing Redis is a dream. It is very easy. This server is running Ubuntu 8.04 LTS. You go to the Redis website and click on downloads. Get the latest stable version, and download it to the server (I used wget). Untar it (tar -zxvf /path/to/file.tgz), and go into the source code’s directory. Now, Redis uses a noticeably smaller amount of memory when compiled for 32bit versus 64bit due to pointer sizes (see the FAQ). So to compile on a 64bit Linux machine, you need to install libc6-dev-i386, which on Ubuntu was "aptitude install libc6-dev-i386".

Then it was a simple "make 32bit" and "make install", and it compiled and installed Redis in /usr/local/bin for me. Now, I wanted it to start up automatically when I rebooted the server, so I saved this init.d script as /etc/init.d/redis. Then I copied the sample configuration file out of the source directory to /etc/redis/redis.conf. I ran "sysv-rc-conf" to set the runlevels for the script to execute at (2, 3, 4, and 5). If you don’t see "redis" listed, make sure you chmod +x /etc/init.d/redis so that it is executable.

Then I ran the command “/etc/init.d/redis start” and it was running.
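Condensed into commands, the install looked roughly like this (the version number and paths are from memory, so adjust them for whatever the current stable release is):

```
# Download and unpack the latest stable release
wget http://redis.googlecode.com/files/redis-1.2.6.tar.gz
tar -zxvf redis-1.2.6.tar.gz
cd redis-1.2.6

# A 32bit build uses noticeably less memory (smaller pointers)
aptitude install libc6-dev-i386
make 32bit
make install            # binaries land in /usr/local/bin

# Configuration and init script (init script saved as /etc/init.d/redis)
mkdir -p /etc/redis
cp redis.conf /etc/redis/redis.conf
chmod +x /etc/init.d/redis
sysv-rc-conf            # enable runlevels 2, 3, 4, and 5
/etc/init.d/redis start
```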

Redis & PHP – Rediska

Now, while looking for a PHP Redis library, I found Rediska. The documentation was pretty basic and straightforward, and I was able to get it installed pretty easily into my PHP app. Now, this library isn’t perfect. I found a few bugs while implementing it, especially with some legal values for ZRANGEBYSCORE as defined by the Redis documentation: Rediska would throw exceptions because I was passing non-integer values. I’ll submit some bugs with patches to them next week so they can update Rediska.

A few things to look over in the Rediska library: first off, read how they manage multiple instances of their Rediska class. It’s a little odd, but it works. I just don’t like passing an array with options in every command I have to make. I would try something like $rediska->getHash(‘key_value’) and expect it to return a Rediska_Key_Hash object, but it would just return an array. So, in the end, I ended up using very little of the other Rediska classes, and stuck to just using the Rediska class and its methods. The good thing is they have good phpDocs, so if you have a good IDE, it will be easy to know which method does what.

Designing for a Key-Value Database Store

Now, I highly recommend reading this article on the different Redis data types. Also, give a look over their PHP Twitter clone built with Redis, since it shows some of the techniques for using a key-value store as a data store.

So the very, very first thing you must do, and we did, is map out how you will design your application for Redis. You don’t have tables and columns any more, just a really big array with some cool tricks. So unlike MySQL, if you have a typo in your table or field name, you won’t get errors. It will just run, but your application will have some serious bugs. So pick a naming convention, document exactly the “tables” and “keys” you’ll be using, with their data types, and stick to it. If you don’t, you will be hating life. </soap_box>

I like the following naming convention. Separate word groups by periods (.) and preface changing variables (like an ID) with a colon (:). So, here some map definitions for our keys:

chatwall.viewers > Hash[user_id] viewer_json – A hashtable with a key of the user id and a value of the member’s json object. This will hold the entire list of all users.

chatwall.wall:{wall_id}.viewers > Hash[user_id] viewer_json – A Hashtable that will hold a list of all the viewers for a given room. Example key: chatwall.wall:382.viewers

chatwall.nextPostId > Int – An int we will auto increment to get unique post IDs

chatwall.posts > Hash[post_id] post_json – The data for a post in a json object.

chatwall.wall:{wall_id}.posts > SortedSet – We will store both the score and the member as the Post ID. This will allow us to quickly get the last 50 posts in a set, and even pass a minimum Post ID to only get new ones.


cache.chatwall:{wall_id}.information > Json String – This will hold a json string of the cached wall information from MySQL.

These were just some of the definitions. Now, these definitions are for documentation only. You can use whatever key names you want on the fly and Redis will accept them. But it is extremely useful to have a list of what the key names are.

We also prefaced any “cached” data (data we treat as if it were in Memcache) with the string “cache.” The reason being, if we wanted to clear any cached data, we could easily do so with a custom script, but we don’t want any of our good data, which we want to stay persistent, to be deleted by accident.
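With that key map, inserting a new post boils down to a handful of Redis commands. Roughly, shown here as raw Redis commands (the post ID, wall ID, and JSON payload are made-up examples):

```
INCR chatwall.nextPostId                 # returns the new post ID, e.g. 1042
HSET chatwall.posts 1042 '{"id":1042,"user_id":7,"body":"Hello!"}'
ZADD chatwall.wall:382.posts 1042 1042   # score and member are both the post ID

# Fetch the latest 50 posts for wall 382, newest first
ZREVRANGE chatwall.wall:382.posts 0 49
```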

Refactoring the Code

Now, one thing that I was extremely grateful for when I designed the ChatWall system is that each type of data had a PHP class with members and functions that would perform the CRUD operations. So it was extremely easy to change the CRUD functions to read and write to Redis: instead of writing out a query, I generated a key and passed the variable $this as the value. Super easy. Having all the data access code encapsulated correctly made it easy to simply go down my class methods and rewrite them.

Now, there were a few caveats. One was moving posts, because an admin can move a post to a different forum or chatroom. Because we had depended on indexes in MySQL to catch the move, in our update commands we had to check the old record first to see if the room had changed. If it had, we would remove the entry from the old chatwall.wall:{id}.posts set and put it in the new one.

Migrating the Data

I simply wrote a script to select the data from our MySQL tables, set it on the new classes, and call the “InsertRecord” function. In 10 seconds the script had migrated all of our data, which took up 25MB in Redis. Pretty awesome.


I couldn’t be happier. It is really, really fast now. We had to double check our PHP code to prevent any calls to the Database, and make sure Redis had all the information it needed. Once we fixed a few areas and our ChatWall calls were 100% executed against Redis, it was blazing fast. I am really, really happy with the change.

I would recommend getting and installing the redis-tools for your server. They make it really easy to see what your Redis server is doing. Our server is currently serving up 200-500 requests per second, and using up about 2-5% of our CPU with 20-30MB of ram. The background saves take 2-3 seconds to perform.

Bottom line, I highly recommend checking out Redis. I wouldn’t recommend replacing MySQL with it, but using it alongside MySQL to handle cached data and highly accessed/changed data. It has been a huge performance boost for Dating DNA. We are in the process of migrating our Match Score System to Redis, which is quite a bit bigger of a project. 500 million rows big. But it is going great, and we are looking to finish that project soon.

Posted in Articles, Programming, Technology

Using Cloud Files w/ CDN for Clipish Library

Over the last two days we’ve moved our Clipish Library from serving the files locally on our web server to Rackspace’s Cloud Files service. It is similar to Amazon S3, however since we were already using Rackspace Cloud Servers, we thought it would be easier to just keep it all under the same roof. The actual changes in code for using Cloud Files only took about 2 hours, but I had to write a script to loop through our entire library and populate Cloud Files with our current library. What is great is in the future when we add items, our code will automatically add files to Cloud Files via their API. Here is our network graph after switching (Item #3):

Posted in Technology

Goal for 2011: Learn C


I feel this is one area of my computer science education that is rather weak. In the years I spent in school learning C, C++, or another derivative, either my teachers and professors really didn’t teach it well, or they were only around for half a semester. Truly, computer science teachers when I was in school were the “professors of defense against the dark arts,” as they were always changing, even mid-semester.

Also, I’ve never had a very strong need to learn C. Almost all of my work is web based, and so traditional LAMP skills were more than enough. However, things are changing in the industry, and there are a few reasons why I want to polish up on my C and re-learn it.

First off, there are a lot of new, cool technologies coming out for web development. The whole NoSQL concept and its different solutions are being written in C-based languages; even after all these years, many are written in plain C. Now, I doubt I would write my own NoSQL solution when there are great ones out there, but I would love to contribute bug fixes, or be able to read and understand on a lower level how they work.

Second, I’ve been cautious and nervous about compiling my own binaries on Linux. Almost always, if a new PHP release came out, I would wait for my distribution to release an update. Sometimes that can take months, even years. By learning C and learning more about how it works, I hope to be able to compile my own binaries when needed, and not feel so hopeless if something breaks.

Third, PHP is written in C. There have been times where I’ve wondered exactly how a specific function worked, or found a bug I would love to submit a fix for. It’s amazing how large the PHP community is, and yet how few contribute to help. I would love to become a code contributor, and learning C is definitely a prerequisite. Also, with web development and PHP, certain tasks could be performed much quicker when compiled, such as the score generation system for Dating DNA. It would be great if I could generate millions of compatibility scores at the C level, instead of just in PHP.

Fourth, iPhone development isn't going anywhere but up. We pay the bills and more with our iPhone sales for Dating DNA and Clipish, and having the option to do some iPhone development when needed would be great. Granted, iPhone development is done in Obj-C 2.0, but when I first tried it, almost every one of my problems stemmed from a fundamental lack of understanding of how Obj-C 2.0 worked. A better understanding of C would help me greatly here.

Fifth, it would just be good to learn. Even if I don't use it a lot, learning the programming language behind so many of my tools would be great. So, alongside my posts on Redis, PHP, and web service development, you might start seeing some beginner-level posts on learning C.

Posted in Programming

Server Monitoring with Wormly


I know I've been behind in my posts, so I want to quickly catch up on a few I've been meaning to write. The first is my recommendation for a service called Wormly. Now, I've tried running my own Nagios servers and such, and they work, but here is why I like Wormly.

  • First, it is simple and easy to use. The interface is great, and covers all the basics.
  • Second, it combines downtime and server health monitoring in one package. I've always thought these two should go together, but that wasn't easy to do on my own with the tools I had. With Wormly, it's super easy to set up. You just install a small one-file PHP script.
  • Third, it's cost-effective. If you are monitoring hundreds of servers, you would probably want to roll your own to integrate with your deployment systems. However, if you have just a handful, or even a few dozen, it is very cost-effective.

The bottom line is there is no reason not to have a server monitoring tool in place, and Wormly has been a great tool. We’ve used it to monitor Dating DNA’s and Clipish’s servers for several months, and it is worth every penny. I highly recommend it.

Posted in Technology

New Job: CTO of Dating DNA


I’ve had a few friends ask me about my current employment, and if I ended up switching jobs. So I thought I would answer them here, or at least have somewhere to point them to. The answer is: yes and no.

A quick recap: a few months ago I was given the chance to apply for a position at ARUP Laboratories, which is the medical testing facility for the University of Utah. It is a great company, listed by Forbes as one of the best companies to work for. During the interview process, when I made it past the initial rounds and was invited down to interview in person with the developer team, I told my biggest client (Dating DNA) I was interviewing with ARUP. If I got the job, I wouldn't be able to do the same volume of work they needed from me, and they would have to find someone else.

My primary reason for looking for a new job was that Joanna and I are trying to start a family, and when we have kids I want Joanna to be able to stay at home. With my current work situation, I couldn't do that reliably. Fortunately, Dating DNA has had some really good success over the last few months, and was able to come back with an offer that would give my family financial security while letting me do a job that I love. ARUP made their offer, and it was a hard decision because my potential co-workers at ARUP were really great.

But at the end of the day, I officially accepted Dating DNA's offer, and took the position of Chief Technology Officer. What does this mean? We're a small company, and my title could be "Master Wizard of the PHP Order" and things would be more or less the same. First off, instead of being an odd hybrid between employee and contractor, I am a full employee. When we look to expand our developer resources, it will be my responsibility to make sure we bring the right people into the right places. At the end of the day, I've basically been doing the job of CTO for them for quite some time, so we might as well make it official.

One of the biggest reasons I decided to stay with Dating DNA is that we have a lot of great growth challenges coming up, and I'm excited to solve them. I get to continue to work with and evangelize PHP and friends as a great platform for building web technologies, and I get the flexibility of telecommuting. We've also discussed, and are starting to implement, procedures so I won't have to be on call all the time (good friends of mine know I've had some seriously inopportune work emergencies). So all in all, I think Dating DNA is getting a great deal, and so am I.

Thank you to all the friends and colleagues who spoke with me and gave me advice. I think this is a great opportunity moving forward. Stay tuned for blog posts on how we solve scaling 500 million match records, scaling our web service APIs, and how we're going to refactor a bunch of old, buggy legacy code into something more manageable.

Posted in General