Monday, March 17, 2008

9 Practical Ways to Enhance your Web Development Using the Firefox Web Developer Extension

Whether you’re a front-end graphic designer or a back-end web programmer, if you’ve worked long enough in the field of creating web-based solutions, you’ve no doubt heard about an extension for the Mozilla Firefox web browser called (simply enough) the Web Developer extension. If you have no clue what I’m talking about, here’s a brief overview from Webs Tips to get you familiarized with this wonderful tool.

A screenshot of the Mozilla Firefox Web Developer tool

This article lists some practical, everyday uses of the Web Developer extension to help improve your web-building methods. I’ve tried to stay away from the more common, basic uses of the extension, like troubleshooting layout issues with the Information > Display Div Order option, because I feel these have been discussed quite enough in other places. New users, don’t run away quite yet; I think this guide will give you a rapid jump start on applying this tool in your daily development routine.

So without further ado, here are nine highly pragmatic uses of the Web Developer extension for Firefox.

1) Change XHTML on-the-fly without changing your web files.

Unfortunately for many developers, we don’t all have the luxury of testing servers and sandbox environments. I, for one, confess to developing on live websites, even during peak web traffic times.

If you’d like to lessen customer support requests due to an inadvertent display:none; property assignment on the log-in box — use the Web Developer extension to effortlessly check your XHTML modifications before you commit them to the server.

Here’s an (extreme) example of how I was able to change some of reddit’s XHTML markup.

The original front page:

Screenshot of reddit's front page before editing XHTML markup.

And here’s the modified version:

Screenshots of reddit's front page after changing some XHTML markup.

As you can see in the above picture, I changed the top three stories (to something I’d much rather read about) and modified the background color to pink (I have an odd affection for hot pink for some reason).

You can achieve the same results by using the Miscellaneous > Edit HTML Markup option, which will open the Edit HTML tab panel displaying the XHTML of the web page. Unfortunately, the window isn’t color-coded, and the Search HTML function doesn’t quite work properly (yet).

A screenshot of the Edit HTML Panel, Displayed on the left of the page.

Tip: You can change the position of the Edit HTML panel by clicking on the Position icon (right next to the Edit HTML tab on the above screenshot).

To change the CSS styles of the page, use the CSS > Edit CSS option, which will allow you to edit the styles used on the web page.

2) Measure things quickly with the Ruler Tool.

Raise your hand if you’ve ever print-screen’ed, and then copy-&-paste’d the screenshot into Photoshop, just to determine the dimensions of certain page objects (like the width of an image) with the selection tool. *Raises hand in shame*

With the Ruler Tool (enable it via Miscellaneous > Display Ruler Tool), you can speedily size up objects inside the web browser. It’s a great tool in conjunction with outline options such as Information > Display Div Order or Information > Display Block Size, allowing you to detect the amount of padding and margin between elements.

Screenshot of the Mozilla Firefox Web Developer extension Ruler Tool.

3) See how web pages look on a non-traditional web browser.

Nowadays, tons of people have mobile devices that let them view web pages in non-traditional ways. Determine whether your pages render correctly (or close enough) on portable device screens by using the Miscellaneous > Small Screen Rendering option. This saves you from going out and purchasing a BlackBerry or a Treo with an internet data plan just for cross-browser checking.

How the Gamespot website looks on normal browsers:

A screenshot of the Gamespot website viewed through the Mozilla Firefox web browser.

What it will look like on a Small Screen Rendering device…

A screenshot of the Gamespot website rendered in a Small Screen Rendering device, as simulated by the Mozilla Firefox Web Developer extension.

4) Find out how optimized your page is.

Use the Tools > View Speed Report option to automatically send your page to Web Optimizer, a site that provides a plethora of information about your web page, like how quickly it loads and how many HTTP connections it uses, among a ton of other things.

There are built-in tools in Adobe Dreamweaver and Flash (if you even have access to them) that simulate download speeds, but nothing beats a free, comprehensive, live speed report.

Screenshot of the results of Six Revisions’ front page speed report from Web Optimizer

5) Populate web form fields instantly.

Don’t you hate it when you have to fill in your custom-built web form for the nth time because you’re testing it? You can quit tabbing and entering junk information on your form fields and switch to using the Form > Populate Form Fields option in the Web Developer extension.

In the example below, you can see that it populates most web forms somewhat intelligently: it was able to guess the email field, but missed the phone number field.

Screenshot of eBay's registration form automatically filled out using the Forms > Populate Form Fields option of the Mozilla Firefox Web Developer extension.

6) Find all the CSS styles that affect an element.

For most fairly proficient CSS developers, it’s quite easy to find the exact selectors that style an element’s properties (fyi: #selector { property: value; }). This is especially true when you’re the original author and/or the styles are contained in one stylesheet.

But what if you were working on someone else’s project… and the project in question has 1,000+ lines of pure CSS goodness, split into several external stylesheets (because Bob, a.k.a. “Mr. Modularity”, likes to keep things “simple”)? Another scenario you might encounter is being tasked to theme a content management system like Drupal or WordPress when you’re not quite sure where all the external stylesheets are.

For example, the Yahoo! home page has over 2,400 lines of CSS, spread over several external stylesheets and inline styles (Bob, you built this page didn’t you?).

Screenshot of Yahoo! front page with CSS - View Style Information of Mozilla Firefox Web Developer extension being used.

If you’re tasked with revising this page, you have two choices: (1) look through, understand, and hunt down the styles you need, or (2) decide that you’re smarter (and lazier) than that and use the CSS > View Style Information option of the Web Developer extension. With this option enabled, clicking on a page element opens the Style Information panel, which displays all the styles that affect the element.

7) View JavaScript and CSS source code in a jiffy.

One of the ways I troubleshoot rendering issues is by looking at how other web pages do it. JavaScript and CSS are often divided into several files — who wants to look through all of them?

Using the Information > View JavaScript and CSS > View CSS options instantly displays all the JavaScript and CSS in a new browser tab. This has the side benefit of aggregating all the CSS styles or JavaScript in one web page, allowing you to use the Find tool of the Mozilla Firefox browser (keyboard shortcut: Ctrl + F for PC users).

8) See how web pages are layered.

It’s often very helpful to determine which divs and page objects are on a higher plane. Using the Information > View Topographic Information option gives you a visual representation of the depth of page elements: darker shades are lower than lighter shades of gray.

Original web design…

Screenshot of a web page before using the View Topographic Information option.

Using the Topographic Information option renders the page to this:

Screenshot of a webpage with Information - View Topographic Information enabled.

9) See if your web page looks OK in different screen sizes.

I use a monitor between 19 and 22 inches (widescreen). This can be problematic because many of our visitors use smaller monitors. Short of switching to a smaller LCD screen to simulate the user experience, I just use the Resize > Resize Window option. It helps test whether my fluid layout works well in smaller windows (sometimes you forget to set min-widths for div elements and it jacks up the layout at smaller screen sizes), or whether your fixed-width layout displays important content without users having to scroll.

Be sure to enable the Resize > Display Window Size in Title option to help you determine the exact dimensions, and also for documentation purposes when you’re taking screenshots of your webpages.

Screenshot of a page with its width set to 800 pixels.

So there we are, nine ways you can employ the Mozilla Firefox Web Developer extension to better your web development experience. I don’t claim to be an expert, but I certainly know enough about the Web Developer extension to improve my web-building speed.

Do you have other tips and strategies on how to further utilize the Web Developer extension? What are the ways you use Web Developer extension in your job? Share them here.

Related links:

Making Secure PHP Applications

There are two basic types of attacks a cracker will use to try to gain access you don’t really want him to have. This lesson runs through what the cracker does and how you can fight back. This is anything but a definitive guide to security; there is no possible way to cover security in a four-page article, but it is a good start. To conclude the introduction: XSS attacks cover too broad a range and won’t be covered here.

Generally, scripts that don’t have their source revealed to the public are harder to crack, and scripts whose source is publicly available always have to take more precautions. Either way, the same precautions should be taken; withholding the source doesn’t make a script uncrackable. Security is extremely important. I learned that when I had a bug in img911’s script that allowed PHP files to be uploaded; one uploaded script gave the attacker full control of my files, and the site was only a week old! People will come to your site and try to gain access if they can. It is only a matter of time before it happens to you. Are you ready?

Attack One: SQL Injection Attacks
What it is
This is by a good margin the most common type of attack because of its sheer power, and because it’s easy to do. SQL injection attacks inject commands via user-inputted data that can damage your database.

How it works
SQL injection attacks happen when a user modifies data that is being sent into a database. For instance, take a URL like:

showimage.php?id=1

You could change that to:

showimage.php?id=1'; DELETE FROM images WHERE 1;

That would cause an SQL error for anything past the WHERE clause, but the DELETE command would still run and work. This gives the cracker full control of your database: anything you could do with mysql_query, he can do. He doesn’t just have to use GET data; he can use POST data being sent from a form, and he can also edit the cookies that the site uses.
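To make that concrete, here's a minimal sketch of how unsanitized input rewrites the query string itself. The build_query() helper is invented for the demo (it mimics what a script like showimage.php might do internally), and no database is touched:

```php
<?php
// Hypothetical stand-in for how showimage.php might build its query,
// pasting the raw "id" parameter straight into the SQL text.
function build_query($id)
{
    return "SELECT * FROM images WHERE id='" . $id . "'";
}

// Normal use: the query is exactly what the author intended.
echo build_query("1"), "\n";
// SELECT * FROM images WHERE id='1'

// Malicious use: the quote closes the string early, and the rest of
// the input becomes a second statement the database will happily run.
echo build_query("1'; DELETE FROM images WHERE 1; --"), "\n";
// SELECT * FROM images WHERE id='1'; DELETE FROM images WHERE 1; --'
```

The vulnerability lives entirely in that string concatenation; everything that follows in this article is about keeping hostile characters out of it.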

How to prevent
There are a few ways to prevent an SQL attack.

Method 1, Clean the data
The first way is to strip slashes, quotes, and other things that have no legitimate purpose in the query. THIS IS NECESSARY FOR ANYTHING THAT IS USER-INPUTTED AND WILL BE USED WITH A DATABASE! User-inputted data is anything that can be edited from the outside: GET data (.php?getdata=data), POST data, and cookies can all be edited by a user. Anything coming from those must be cleaned, or your script is not safe. I use this function to clean my data:

function sql_safe($value)
{
    // Strip slashes added by magic quotes, if enabled
    if (get_magic_quotes_gpc()) {
        $value = stripslashes($value);
    }

    // Escape if not an integer
    if (!is_numeric($value) || $value[0] == '0') {
        $value = mysql_real_escape_string($value);
    }

    return $value;
}

Just run all your user-inputted data through it like this:

$var = sql_safe($_GET["data"]);

This way, all invalid data is stripped out and your database is safe. This method is not optional; every application you make must have this or equivalent code, as it is the only surefire method that can’t be hacked. If you do this with all your data, your site is safe from SQL injection attacks. The next two methods are icing on the cake, not hacker-proof methods.
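As an aside for anyone on PHP 5.1 or later: parameterized queries via PDO accomplish the same goal without manual escaping, because the data never travels inside the SQL text at all. Here's a rough sketch against an in-memory SQLite database (the images table is invented for the demo):

```php
<?php
// Prepared statements keep data out of the SQL string entirely.
$db = new PDO('sqlite::memory:');
$db->exec("CREATE TABLE images (id INTEGER, name TEXT)");

// The ? placeholders are bound to raw values; even a hostile string
// is stored as plain data instead of being executed as SQL.
$stmt = $db->prepare("INSERT INTO images (id, name) VALUES (?, ?)");
$stmt->execute(array(1, "1'; DELETE FROM images WHERE 1; --"));

// The table survives, and the attack string is just a value in a row.
$row = $db->query("SELECT name FROM images WHERE id = 1")->fetch();
echo $row['name'], "\n";
```

The design win is that the query shape and the data are sent to the database separately, so there is no string for an attacker to break out of.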

Method 2, Table prefixes
A fact of life is that everyone makes mistakes; chances are pretty high that you will forget to clean one piece of user-inputted data somewhere down the road, and as small as the chance is, a cracker might find it. Most programs use standard names for their database tables; in a forum, for example, the table that holds the posts would be named “posts”. Crackers know this and will try every name relevant to the type of site. The best way around this is to add a prefix to your table names: instead of “posts”, make it “forum_posts”. Even a common prefix like that makes it a good deal harder to hack. I use the first three letters of my control panel login name as my table prefix. Do not rely on this method; it just makes it harder for a cracker to get in should you miss a step.

Method 3, Don’t give the SQL user delete rights
This method is anything but a strict guideline; most of the scripts I make require deleting rows. But if it isn’t needed and you don’t have to delete rows, don’t give the user permission to. That way, if you forgot to clean something and the cracker got the prefix, he still can’t delete anything. Use it when you can, but don’t change a script to cater to this; it is little more than a final precaution.

Attack Two: Forged Data

What it is
Forged data is when a user edits a cookie to make himself look like an admin. The only way this can happen is flawed design: generally static data, or failing to verify the data in question. As rare as it may seem, it is an error I’ve seen a lot, even in some scripts that are for sale. This attack is more obscure, but it can happen if you run into the wrong person. It can be worse than an SQL injection attack because it is less apparent; the cracker doesn’t have to destroy anything. Lastly, you normally have to have access to the script to do this, but not always.

How it’s done
Cookies can be edited easily; they are just text files on your computer. A cracker will go in and change the data to imitate what the script thinks is an admin. In Firefox, to view a cookie, all you have to do is go to Tools -> Options -> Privacy -> View Cookies. To change them, you have to shut the browser down and edit C:\Documents and Settings\name\Application Data\Mozilla\Firefox\Profiles\profile name\cookies.txt

All your cookies reside in there; the file is not encrypted, so you can change anything you want.

How to protect against it

A lesson in how to make secure login cookies
What many scripts do is lay out cookies like this:

cookie 1:
user name

cookie 2:
user password

cookie 3:
user rank

This makes it easy to see whether a user is an admin or not, but there is one huge problem: all a cracker has to do is change the rank cookie to whatever the script looks for in an admin, and he is in. This is a flawed design. What you have to do instead is this:

cookie 1:
user ID

cookie 2:
user pass, encrypted via sha1 (you use sha1 when storing it in the db, right?)

When the script sees the cookies, it goes into the database and checks whether the user with that ID has that password. If so, the user is who they claim to be, and you can read their rank from the database. If the data doesn’t match, you delete the cookie and they’re out. The only way to hack this is to know the password, and if the cracker knew that, he wouldn’t be going in the back door.

Hold as little data as you can in the cookie; all it needs to do is provide solid proof that the user in question is legit, not what rank they are.
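Under those assumptions (a user-ID cookie plus an sha1-hashed password cookie, with ranks stored server-side), the check might be sketched like this. The $users array and validate_cookie() are made up for illustration; $users stands in for your real database lookup:

```php
<?php
// Stand-in for a database table: user ID => sha1'd password and rank.
$users = array(
    42 => array('pass_sha1' => sha1('secret'), 'rank' => 'admin'),
);

// Validate the two cookies: a user ID and an sha1 hash of the password.
// Returns the rank from the database on success, or false on failure.
function validate_cookie($users, $cookie_id, $cookie_pass_hash)
{
    if (!isset($users[$cookie_id])) {
        return false; // unknown user ID
    }
    if ($users[$cookie_id]['pass_sha1'] !== $cookie_pass_hash) {
        return false; // hash doesn't match: forged or stale cookie
    }
    // The rank comes from the database, never from the cookie itself.
    return $users[$cookie_id]['rank'];
}

echo validate_cookie($users, 42, sha1('secret')), "\n"; // admin
var_dump(validate_cookie($users, 42, sha1('guess'))); // bool(false)
```

The key design point is that the cookie only proves identity; authorization data like rank never round-trips through the client.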

I hope this article helps you make more secure PHP applications. If there is anything that has slipped my mind, please drop me a PM.


Feel free to post this anywhere as long as the below line is here
Originally written by Village Idiot of

Understanding memory usage on Linux

This entry is for those people who have ever wondered, "Why the hell is a simple KDE text editor taking up 25 megabytes of memory?" Many people are led to believe that many Linux applications, especially KDE or Gnome programs, are "bloated" based solely upon what tools like ps report. While this may or may not be true, depending on the program, it is not generally true -- many programs are much more memory efficient than they seem.

What ps reports
The ps tool can output various pieces of information about a process, such as its process id, current running state, and resource utilization. Two of the possible outputs are VSZ and RSS, which stand for "virtual set size" and "resident set size", which are commonly used by geeks around the world to see how much memory processes are taking up.

For example, here is the output of ps aux for KEdit on my computer:

dbunker 3468 0.0 2.7 25400 14452 ? S 20:19 0:00 kdeinit: kedit

According to ps, KEdit has a virtual size of about 25 megabytes and a resident size of about 14 megabytes (both numbers above are reported in kilobytes). It seems that most people like to randomly choose to accept one number or the other as representing the real memory usage of a process. I'm not going to explain the difference between VSZ and RSS right now but, needless to say, this is the wrong approach; neither number is an accurate picture of what the memory cost of running KEdit is.

Why ps is "wrong"
Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely "wrong". In order to understand why, it is necessary to learn how Linux handles shared libraries in programs.

Most major programs on Linux use shared libraries to facilitate certain functionality. For example, a KDE text editing program will use several KDE shared libraries (to allow for interaction with other KDE components), several X libraries (to allow it to display images and copy and pasting), and several general system libraries (to allow it to perform basic operations). Many of these shared libraries, especially commonly used ones like libc, are used by many of the programs running on a Linux system. Due to this sharing, Linux is able to use a great trick: it will load a single copy of the shared libraries into memory and use that one copy for every program that references it.

For better or worse, many tools don't care very much about this very common trick; they simply report how much memory a process uses, regardless of whether that memory is shared with other processes as well. Two programs could therefore use a large shared library and yet have its size count towards both of their memory usage totals; the library is being double-counted, which can be very misleading if you don't know what is going on.

Unfortunately, a perfect representation of process memory usage isn't easy to obtain. Not only do you need to understand how the system really works, but you need to decide how you want to deal with some hard questions. Should a shared library that is only needed for one process be counted in that process's memory usage? If a shared library is used by multiple processes, should its memory usage be evenly distributed among the different processes, or just ignored? There isn't a hard and fast rule here; you might have different answers depending on the situation you're facing. It's easy to see why ps doesn't try harder to report "correct" memory usage totals, given the ambiguity.

Seeing a process's memory map
Enough talk; let's see what the situation is with that "huge" KEdit process. To see what KEdit's memory looks like, we'll use the pmap program (with the -d flag):

Address Kbytes Mode Offset Device Mapping
08048000 40 r-x-- 0000000000000000 0fe:00000 kdeinit
08052000 4 rw--- 0000000000009000 0fe:00000 kdeinit
08053000 1164 rw--- 0000000008053000 000:00000 [ anon ]
40000000 84 r-x-- 0000000000000000 0fe:00000
40015000 8 rw--- 0000000000014000 0fe:00000
40017000 4 rw--- 0000000040017000 000:00000 [ anon ]
40018000 4 r-x-- 0000000000000000 0fe:00000
40019000 4 rw--- 0000000000000000 0fe:00000
40027000 252 r-x-- 0000000000000000 0fe:00000
40066000 20 rw--- 000000000003e000 0fe:00000
4006b000 3108 r-x-- 0000000000000000 0fe:00000
40374000 116 rw--- 0000000000309000 0fe:00000
40391000 8 rw--- 0000000040391000 000:00000 [ anon ]
40393000 2644 r-x-- 0000000000000000 0fe:00000
40628000 164 rw--- 0000000000295000 0fe:00000
40651000 4 rw--- 0000000040651000 000:00000 [ anon ]
40652000 100 r-x-- 0000000000000000 0fe:00000
4066b000 4 rw--- 0000000000019000 0fe:00000
4066c000 68 r-x-- 0000000000000000 0fe:00000
4067d000 4 rw--- 0000000000011000 0fe:00000
4067e000 4 rw--- 000000004067e000 000:00000 [ anon ]
4067f000 2148 r-x-- 0000000000000000 0fe:00000
40898000 64 rw--- 0000000000219000 0fe:00000
408a8000 8 rw--- 00000000408a8000 000:00000 [ anon ]
... (trimmed) ...
mapped: 25404K writeable/private: 2432K shared: 0K

I cut out a lot of the output; the rest is similar to what is shown. Even without the complete output, we can see some very interesting things. One important thing to note about the output is that each shared library is listed twice; once for its code segment and once for its data segment. The code segments have a mode of "r-x--", while the data is set to "rw---". The Kbytes, Mode, and Mapping columns are the only ones we will care about, as the rest are unimportant to the discussion.

If you go through the output, you will find that the lines with the largest Kbytes number are usually the code segments of the included shared libraries (the ones that start with "lib" are the shared libraries). What is great about that is that they are the ones that can be shared between processes. If you factor out all of the parts that are shared between processes, you end up with the "writeable/private" total, which is shown at the bottom of the output. This is what can be considered the incremental cost of this process, factoring out the shared libraries. Therefore, the cost to run this instance of KEdit (assuming that all of the shared libraries were already loaded) is around 2 megabytes. That is quite a different story from the 14 or 25 megabytes that ps reported.

What does it all mean?
The moral of this story is that process memory usage on Linux is a complex matter; you can't just run ps and know what is going on. This is especially true when you deal with programs that create a lot of identical child processes, like Apache. ps might report that each Apache process uses 10 megabytes of memory, when the reality might be that the marginal cost of each Apache process is 1 megabyte of memory. This information becomes critical when tuning Apache's MaxClients setting, which determines how many simultaneous requests your server can handle (although see one of my past postings for another way of increasing Apache's performance).

It also shows that it pays to stick with one desktop's software as much as possible. If you run KDE for your desktop, but mostly use Gnome applications, then you are paying a large price for a lot of redundant (but different) shared libraries. By sticking to just KDE or just Gnome apps as much as possible, you reduce your overall memory usage due to the reduced marginal memory cost of running new KDE or Gnome applications, which allows Linux to use more memory for other interesting things (like the file cache, which speeds up file accesses immensely).

PHP Benchmarking: Arrays and Iteration

As a programming department, our goal is always to code in ways that provide, on average, the fastest execution times. In today’s discussion, we’ll be testing the many different ways to process and interact with arrays in PHP, in the first of three main areas: reading, modifying, and reconstructing.

We’ve got some ground work to establish today, so modifying and reconstructing will be saved for later. Also, as time goes on, we’ll be talking about many different areas of PHP (like functions versus objects and static methods). But, for today, arrays have been chosen as our first point of discussion because, at least in my experience, they are more than ubiquitous when it comes to developing dynamic or content-driven applications, and dealing with large amounts of data (e.g.: user lists, pages, resources of all kinds, etc).


As PHP is an interpreted language, and all machines/OSes/configurations/processors are not created equal, please note that results will vary from machine to machine. My configuration for these tests is as follows: AMD Athlon 64 X2 Dual Core 6000+, 2 GB Kingston PC-6400, Windows XP SP2.

Getting started

When benchmarking, the goal is to determine the comparable time percent difference for a given group of methods; however, it is equally important to consider the data being used in these tests and how it is applied. It’s best to use data that reflects a real-world application, or as close to one as possible. We’re going to be dealing with a decent-sized array of 5,000 elements, each containing a string of 1,024 bytes. In the real world, this could reflect a list of pages that we’ve chosen to search through via PHP, or, more believably, a list of comments on a blog post such as this one.

I’ve created a few generic classes to help me out with the nitty-gritty of the benchmarking process. At the end of this series on PHP-specific benchmarks, I’ll walk everyone through how to use these classes for their own tests. For today, though, I’m just going to gloss over this groundwork and get on to the specific tests. Here’s how I’ve set up my test data and iteration control.

define('ITERATIONS', 10000);
define('ELEMENTS' , 5000);
define('VALUE_SIZE', 1024);

$ag = new ArrayGenerator(ELEMENTS, VALUE_SIZE);

$tc = new TimeCompare();
// The associative array run tests on
$a_assoc = $ag->generate(AG_TYPE_ASSOC);
// The indexed array to run tests on
$a_index = $ag->generate(AG_TYPE_INDEX);

As an extra benefit to those who download the source code, ArrayGenerator and all forthcoming Generators provide a small, mostly trivial lesson in PHP 5’s class inheritance and interface models.
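For readers who don't grab the source, the timing core of such a harness is simple enough to sketch. The benchmark() helper below is a hypothetical stand-in for my TimeCompare class, not the real thing (and the closure syntax needs PHP 5.3+):

```php
<?php
// Minimal benchmarking core: run $fn $iterations times and return
// the elapsed wall-clock time in seconds, measured with microtime().
function benchmark($fn, $iterations)
{
    $start = microtime(true);
    for ($i = 0; $i < $iterations; ++$i) {
        $fn();
    }
    return microtime(true) - $start;
}

// Example payload: concatenate a 5,000-element array of 1 KB strings,
// mirroring the test data generated above.
$a = array_fill(0, 5000, str_repeat('x', 1024));

$elapsed = benchmark(function () use ($a) {
    $t = '';
    foreach ($a as $value) {
        $t .= $value;
    }
}, 100);

printf("100 iterations took %.6f seconds\n", $elapsed);
```

Wall-clock timing like this is noisy, which is why the percentages in the tables below matter more than the absolute numbers.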

Alright, let’s get on with it.

Reading arrays

Right off the top of my head, whether arrays are associative or indexed, I can think of four (4) different iterative structures that can be used to read through an array in PHP: for, foreach, list/each, and current/next.

For-loops have several variants. First, we will examine the traditional form, and then we will look at more optimized versions. In our test, we’re simply going to concatenate all of our array values into a separate variable. This test exemplifies the scenario where several pieces of data stored in an array need to be output to the page (sans formatting, for us). Here’s the code for both indexed and associative arrays.


$t = '';
for ( $i=0; $i<count($a_index); $i++ ) {
    $t .= $a_index[$i];
}

$t = '';
$keys = array_keys($a_assoc);
for ( $i=0; $i<count($a_assoc); $i++ ) {
    $t .= $a_assoc[$keys[$i]];
}

Indexed vs. associative arrays

Test                 Time (%)  Total time (ms)
Traditional (index)  100       0.000000126
Traditional (assoc)  105       0.000000147

The lesson here is to use indexed arrays when possible; however, this may only be applicable to for and other loops that use index counters. It may not apply to loops that use keys or other means of iteration; associative may be faster, or it may be the same — we’ll find out later.

Now, let’s be smart about our resources, move the count() statement outside of the loop body, and eliminate those extra n function calls. In general, we would write something like this.

Limit counting outside of the loop body

$limit = count($some_array);
for ( $i=0; $i<$limit; $i++ ) {
    // … do something …
}

Now, applying this method to the previous two loops, let’s retest to see what improvement, if any, we get. In theory, depending on how count() behaves, it could be quite substantial.

Pre-counting the array size outside of the loop

Test                            Time (%)  Total time (ms)
Traditional, pre-count (index)  100       0.000000110
Traditional, pre-count (assoc)  106       0.000000117

Pre-counted vs. counting as part of the loop

Test                            Time (%)  Total time (ms)
Traditional (index)             115       0.000000126
Traditional (assoc)             134       0.000000147
Traditional, pre-count (index)  100       0.000000110
Traditional, pre-count (assoc)  106       0.000000117

Furthermore, if we switch from the post-increment operator to the pre-increment operator, we should also see some improvement. The difference is that with the post-increment operator (a++), a copy of the variable is made, the original value is incremented, and then the copy is returned. With the pre-increment operator (++a), the value is incremented and then returned, cutting out a step. In general, we would write something like this.

Pre-incrementing the loop counter

$limit = count($some_array);
for ( $i=0; $i<$limit; ++$i ) {
    // … do something …
}

This gives us the following comparison.

Post vs. pre-incrementing

Test                                             Time (%)  Total time (ms)
Traditional, pre-count (index)                   110       0.000000111
Traditional, pre-count (assoc)                   119       0.000000121
Traditional, pre-count, pre-increment * (index)  100       0.000000101
Traditional, pre-count, pre-increment * (assoc)  112       0.000000114
* We’ll refer to this last version of the for-loop (limit pre-counting, pre-incrementing) as the optimized for-loop from now on.
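As an aside, the semantic difference between the two operators is easy to verify in isolation:

```php
<?php
// Post-increment: the old value is returned, then the variable grows.
$a = 5;
echo $a++, "\n"; // prints 5
echo $a, "\n";   // prints 6

// Pre-increment: the variable grows first, then the new value returns.
$b = 5;
echo ++$b, "\n"; // prints 6
```

When the returned value is discarded, as in a loop counter, the two are interchangeable, which is why swapping in ++$i is a free micro-optimization.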

So now that we have everyone’s favorite for-loop thoroughly smashed (like an Idaho potato), let’s move on to the other three methods for iterating over an array. In these final tests, we’ll be upping our test parameters a tad.

define('ITERATIONS', 1000000);
define('ELEMENTS' , 5000);
define('VALUE_SIZE', 1024);

The beauty of these final tests is that since subscripts aren’t used, we can use the same code and just change the array names where necessary. And since we’ve spent so much time already, let’s just put all the code out there straight away.

Foreach indexed/associative

$t = '';
foreach ( $some_array as $value ) {
    $t .= $value;
}

List/each indexed/associative

$t = '';
while ( list(,$value) = each($some_array) ) {
    $t .= $value;
}

Current/next indexed/associative

$t = '';
$value = current($some_array);
while ( $value !== false ) {
    $t .= $value;
    $value = next($some_array);
}

So now that we have our code, let’s take a look at our results.

Test                    Time (%)  Total time (ms)
For, optimized (index)  100       0.0000000610
For, optimized (assoc)  113       0.0000000691
Foreach (index)         104       0.0000000637
Foreach (assoc)         104       0.0000000633
List/each (index)       113       0.0000000692
List/each (assoc)       111       0.0000000680
Current/next (index)    109       0.0000000666
Current/next (assoc)    108       0.0000000659

Interestingly enough, it appears that in the last three methods, associative arrays seem to out-perform indexed arrays by a slight amount. And although we’re talking about only one-to-two-percent differences, in the grand scheme of things, it all adds up.

So, overall, it appears that for is the winner of this match-up, with foreach, current/next, and list/each following successively. For associative arrays, foreach appears to be the definite way to go, being at least 4% faster than its closest rival, current/next. This is good news, since foreach is used so commonly in almost every piece of PHP code I’ve ever seen. It’s easy, it’s fast, and, at least from this benchmark, it appears to be the right choice when dealing with associative arrays.