Deliveroo Is Killing, Not Saving The Restaurant Trade

Vaunted teal meals-on-wheels unicorn Deliveroo has claimed this week that it'll generate 70,000 new jobs in the restaurant trade and help bring the industry back to its former glory, thanks to huge new investment across the UK. Thank god. More bikes, fewer employment contracts, and less human interaction - just what the kangaroo ordered.

News outlets are lapping this up - hopping on the story and dishing up praise for this loveable London-based gang of food delivery aficionados.

But I take serious issue with the company for a variety of reasons.

Delivery Drivers Are Worthless

This is essentially their slogan. Delivery drivers are paid less than minimum wage and reduced to dodging traffic on pedal bikes while ferrying food across our cities.

Restaurant Prices, Takeaway Quality

The food at a restaurant is only one part of the puzzle. Aside from the atmosphere of a place, the expertise of your waiting staff, and the quality of interaction you can have in person, your food should be expertly crafted and served up fresh from the kitchen.

Deliveroo forces restaurants to shove food into containers ready to be slung across the city - and yet still charges you full whack! Insanity.

This is a race to mediocrity. A race to generalisation. If you run a restaurant, slapping Deliveroo onto your offering diminishes the quality of your brand, because you're subtly telling your customers that coming to your place isn't as important as convenience.

Centralisation of Resource

The idea that 70k jobs will magically appear in the industry thanks to the amazing investment by Deliveroo is pure fantasy. As a restaurateur, it makes little sense to pay for a city-centre spot if you're just sending food out on bikes.

As larger chains increase their investment in delivery it makes sense to create centralised production kitchens. This will happen at scale and drive down the number of high-quality jobs generated.

A Venture and Nothing More

Venture capital is a thing. Like it or loathe it, people with money to spare will always want to see it grow.

But as with many new companies, Deliveroo's profits are almost non-existent and laughable when compared to their turnover. In fact, last year losses soared 43%, and yet - because people will always want food brought to their door - they continued to grow.

If you don't make a profit, you shouldn't be in business. When the economy turns, businesses that don't generate money turn too. You don't need an MBA to understand this.

But...

Customer needs will always win and the market is clearly voting for Deliveroo. Their service is slick and thanks to Amazon's hefty investment they have plenty of runway.

I don't like the business, but that won't stop it succeeding.

I just hope people don't forget to go outside once in a while.

What is your impact on the world?

Technology as a whole has always been about enhancing the human experience, elevating our capabilities to levels that are dizzying to generations past. We've achieved the most amazing feats of engineering, and perhaps the most telling metric of our success is our apathy towards them. We've walked on the moon and cured diseases that once threatened to wipe out the whole of humanity. We split the fundamental building blocks of our universe every day, and mould stardust - battered and forged into all manner of elements - into MRI scanners and novelty keyrings alike.

We've been layering abstractions upon abstractions for millennia in order to build upon the learnings of our forebears. You don't need to understand the fundamentals of the combustion engine in order to drive, nor should you have to. This really started to ramp up with the agricultural revolution and has been gaining velocity ever since. Now it's impractical to understand every layer in its entirety, so we tend to specialise in a small subset of abstractions.

When you buy a new phone you don't need to know, let alone care, about the precise origin of the device. It doesn't matter to you right now that the battery's raw materials formed over millions of years deep within the Earth before rising up to the crust to be uncovered by our ancestors.

But the cost of each item shouldn't be measured in an environmental context alone. We should also include the human element of each interaction we partake in with the world.

And yet we're outraged by the continual news beat of poorly-treated warehouse staff - working in conditions that echo the horrors of the slave trade in a pre-human-rights era. But why are we shocked? The drive to ever-enhance the end-user experience and extract more wealth from consumers - scaling businesses exorbitantly in the process - has created a need to divorce our customers from the true cost of production.

We've obscured the human effort that is necessary to fulfil the promise of modern convenience and keep the cost down.

In our drive for convenience and desire for lower costs, we've hidden the impact of our purchases behind the simplest of indicators - cold, hard cash. No matter that thousands of warehouse workers collapse every year at the biggest retailers. Who cares about the suicides at electronics factories in some far-flung destination across the globe? We trade lower costs for lower standards of production in a don't-ask-don't-tell approach to manufacturing.

We've been on a quest for ever-smaller and more lightweight devices for the best part of three decades now, and we're at the point of disregarding ingenuity in manufacturing, allowing cowboy techniques of simply gluing components together to fill the void.

The most cutting-edge phones are not designed to be repaired. The most cutting-edge companies wouldn't allow it anyway. Take AirPods, for instance. The tiny lithium battery within each pod will fail after 18 months, no longer able to hold a charge. Such is their construction that they can't be recycled - their tiny, hugely complex, glue-filled internals will be left to rot in landfill for a millennium or more (provided they don't explode and burn first).

In more recent times we've built whole industries on building brands - abstractions of groups of people working together. No longer do we talk about the human effort required to deliver something; we just talk about the features delivered by a company. A good example of this is what Jaron Lanier outlines in his book, You Are Not a Gadget: chess-grandmaster-beating software. We talk about the incredible skills of computers (usually) without considering the human effort that was required to build them.

Computers don't play chess well because they are ingenious automatons. They do so because their programmers have constructed powerful models of the world that, when combined with incredible hardware bearing the fingerprints of yet more human endeavour, can beat chess champions. Pretending that capabilities such as artificial intelligence and speech recognition are emergent properties of raw computing power is as disingenuous as it is misguided.

This great shift to obscuring impact is an anti-humanistic and anti-environmentalist assault on the world. If a product team can okay a device that will be totally obsolete, wholly unrecyclable, and highly dangerous if disposed of incorrectly, what does that say? If they disregard their empathy, relying on faceless manufacturing brands to abstract away the human torment necessitated by their supply chain, how do we reconcile that?

Ultimately the market will respond to consumer demand. There's no point in attempting to impose additional rules that we wish manufacturers would follow; consumers need to vote with their purchases.

Take the outcry at single-use plastics. As consumers have adapted to reject the more contentious applications of plastic, so too should we become more sophisticated, empathetic, and critical consumers of modern goods.

Our planet, our home is still reeling from the greatest advancement in technology in human history - the industrial revolution. The artefacts of this era have nudged the global thermometer up and upset a great many natural processes. We unlocked energy in order to propel us forward, but unleashed carbon at the same time - now the biggest forewarning of what is to come. The generations that follow will doubtless lament the artefacts of our era - complex, unrecyclable consumables that won't degrade for thousands of years.

How do we improve?

By asking not how much an item costs, but rather, what is your impact on the world?

I'm optimistic that our processes for recycling and reclaiming the fundamental elements of our disposable consumerism will advance. The future will be brighter, but we shouldn't allow optimism about tomorrow to foster apathy today.

Pass data from modal view back to parent in the iOS SDK

I recently came across a problem using the newest version of Xcode (4.4), in which we can use the fantastic Storyboards feature. The question was simple: how do I present a modal view, ask the user for some data, and then return that data to the parent view? In the past it was a pretty simple matter, but passing it back along via a navigation controller turned out to be slightly more complex than I predicted. Luckily the code to fix this issue is pretty concise and very easy to understand. Let's take a look.

Here's what we're dealing with in terms of the view setup:

First things first, give your modal segue an identifier like so:

Make this descriptive; for my app the modal view is used to add an order. Now we need to get our hands dirty and start coding. Jump to the modal view's header file and add this property:

@property (nonatomic, assign) id delegate;

This will allow us to assign a delegate to our view; in our case it will allow the parent view to tell the modal view that it is the delegate, and therefore that all data should be passed back to the parent. Note: don't forget to @synthesize the above property in your .m file.

Now let's imagine you want to pass back the contents of an input box when a button is tapped. I'll assume you've set up a method called didFinishEnteringData:sender - a fairly common-looking method name as generated by Xcode. Here's the code you would use inside this method:

- (IBAction)didFinishEnteringData:(id)sender {
    [self.delegate setInput:myInput.text];
    [self dismissModalViewControllerAnimated:YES];
}

And voilà! We are now talking to our delegate. But hang on - we need to have our parent assign itself as the delegate before the modal view is presented. And for that we need to jump to our parent view. An important point here: this method applies to the storyboard-configured way of presenting a modal view, rather than a programmatic approach. Here's the code:

-(void) prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {    
    if([segue.identifier isEqualToString:@"NewOrder"]){

        AddOrdersViewController *vc = (AddOrdersViewController *)[[[segue destinationViewController] viewControllers] objectAtIndex:0];
        
        [vc setDelegate:self];

    }
}

Notice the use of the identifier we configured before. Also note the modal view's class name, and the need to #import the header file of the modal view's class.

And that is how to simply access the parent to send data back from a modal view!

Implement the WordPress Geolocation Plugin

For a while now WordPress has had apps across the mobile spectrum, and a great feature of these is geolocation - great, but who wants to know where I'm blogging from, right? Well, recently I had to build a blog for someone travelling around the world, and it quickly became apparent that using the data saved by the app would be a great feature for the site. In this tutorial we'll see the basic implementation of adding the plugin, but more importantly we'll look at how we can then plot a route across the globe using data from every post. So let's get started!

First off you'll need to go and grab the Geolocation plugin and install it on your site. The default behaviour of this plugin is to insert a link at the bottom of single posts and show a map to users when they hover over it. But what if we want to show the map by default? And what about using all that geolocation data from each post? Well, the first request is relatively simple - in fact it's just a change of CSS. Here's the alteration to style.css in the geolocation folder:

#map { background: #fff; border: solid 1px #999; padding: 20px; visibility: hidden; }
.home #map { display: none; }

The second rule is useful if you use the_content() on your home page - because only one map element is output for all posts, you'd otherwise be left with a big empty div.

Plotting your posts on a map

Here's the exciting bit: to begin with we'll look at creating a route out of post data, and displaying it on a nice big map. Thankfully the WordPress app doesn't use anything big, scary, or evil to store its data - it just stores it as a custom field! So accessing the data is easy as pie! Now in this case I'm assuming that you're whacking the PHP code in something like map.php - a template for a page - and that the JavaScript coming later will be in footer.php. This is important because the JS will be using a PHP variable - namely the coordinates to plot. If you have a different structure, you'll need to find some way of getting this data from PHP, either via an AJAX request, or clever script positioning. Let's take a look at how we'd go about getting all of this data into our PHP back-end, before utilising it with the Google Maps JavaScript API:

<?php 
    $points = '';
    query_posts('posts_per_page=500'); 

    while ( have_posts() ) : the_post(); 
    
        $points .= '(' . get_post_meta($post->ID, 'geo_latitude', true) . ',' . get_post_meta($post->ID, 'geo_longitude', true) . '),';  

    endwhile; 
    
    $points = substr($points, 1, -2); // Remove the initial '(' and final '),'
?>

<div id="canvas"></div>

So above we have the standard WordPress loop that cycles through the 500 most recent posts (courtesy of query_posts()) - this should all be familiar. Inside the loop we keep adding to the $points variable in the format (latitude_1,longitude_1),(latitude_2,longitude_2) and so on. We use the handy get_post_meta() function (here's the reference) to get the coordinates, and we end by removing the first bracket and the final bracket-and-comma - we do this because when we switch to JS we need a clean array. Finally we have a div with the ID "canvas" - this is where we'll put our map - so feel free to style this in your CSS. I added the following to the theme's style.css:

#canvas {
    height:600px;
    width: 96%;
    margin-left: 2%;
}

And now we're ready to move into the crazy realm of JavaScript!
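Before we do, it's worth pinning down the exact string format the PHP above hands over, since the JavaScript will rely on it. Here's a tiny standalone sketch of the trimming and splitting (the function name is mine, purely for illustration):

```javascript
// A standalone sketch of the PHP-to-JS hand-off format. The loop above
// produces a string like "(lat1,lng1),(lat2,lng2)," - one bracketed pair per
// post, with a trailing comma. Dropping the leading "(" and the final "),"
// leaves a string that splits cleanly on "),(" into one "lat,lng" chunk per
// post. splitPoints() is an illustrative name, not part of the plugin.
function splitPoints(raw) {
    var trimmed = raw.slice(1, -2);  // mirrors PHP's substr($points, 1, -2)
    return trimmed.split("),(");     // one "lat,lng" string per post
}
```

For example, splitPoints('(51.5,-0.1),(48.8,2.3),') yields ['51.5,-0.1', '48.8,2.3'] - exactly the shape the parsing code below expects.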

Luckily for us the Geolocation plugin already takes care of adding the Google Maps API v3, so we can get straight into using it with our site! But before we get into our JavaScript we're going to be good coders and only deliver our code to users on the map page, so we'll use yet another handy WordPress function to detect when we're on the map.php template page. Go ahead and whack this into footer.php:

<?php  if ( is_page_template('map.php') ) : ?>

     /* JS goes here... */

<?php endif; ?>

Now let's get to the really meaty code. The code below uses one jQuery call, and it's just to check that the DOM is fully loaded, so you can use any equivalent function or whack the code into onload="" in the body tag. First up we'll look at getting the data from PHP and making it usable; put the following code within <script> tags:

function init(){
    var p = "<?php global $points; echo $points; ?>", mapCoords = null;
	
    p = p.split("),(");
	
    for(var x = 0; x < p.length; x++){
        p[x] = p[x].split(",");
        if(mapCoords == null){
            mapCoords = [new google.maps.LatLng(p[x][0], p[x][1])];
        }else {
            mapCoords.push(new google.maps.LatLng(p[x][0], p[x][1]));
        }
    }	
	
    newMap('canvas', mapCoords);
}


$(function(){ init(); });

So the function above begins by taking the data we just extracted using PHP and putting it into a variable called p; we also define one called mapCoords. Then we turn p into an array by splitting up each pair of coordinates, before jumping into a for loop. If you've not used the Google Maps API before this may look a little daunting, but it's fairly simple once you get started. First off we set up our loop to cycle through every set of coordinates in our array, and once inside we split each coordinate pair into latitude and longitude and store that array back in p. Then we check whether mapCoords is still empty: if it is, we create the array with our first value inside it (the square brackets); otherwise we push the new value onto the existing array. That value is a special object provided by the API for representing points on the map - we just pass it the relevant latitude and longitude. Once the loop is complete, we call a mysterious function named newMap(), and pass it the ID of our map canvas, as well as our newly created route coordinates. Let's take a look at how we implement this new function.

function newMap(id, mapCoords){
    var centre = (new google.maps.LatLng(51.44031275716014, 0.3955078125)),
        zoomLevel = 6,
        route,
        myOptions,
        map;
	
    route = new google.maps.Polyline({
        path: mapCoords,
        strokeColor: "#2324e4",
        strokeOpacity: .70,
        strokeWeight: 7,
        editable: false
    });
		    	
    myOptions = {
        center: centre,
        zoom: zoomLevel,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    };
		
    map = new google.maps.Map(document.getElementById(id), myOptions);
    route.setMap(map);
}

First up we define a few variables: the map centre as it appears to the user (here it's somewhere between the UK and France), the zoom level (6 will zoom pretty far out; 16 is street level), and the route, which we'll add to soon. Then we make use of our route variable - this is a line that will be drawn on our map and will represent the route between posts. There are a number of options here, but the most important is the path - we set it equal to the coordinates passed to our function. Next up we define some options for our map and put them into myOptions. Here we set the centre and zoom level, as defined above, as well as the type of map we want to display - here we've created a roadmap. Finally we create our map, and tell our route that it needs to draw itself on that map.

Up to this point the code we've written will draw a blue line on your map, so if that's all you need you're done! But if you want to add your posts to the map as markers read on!

Plot Posts with Pins

Now it might be the case that you don't want a route plotted, or maybe you want a route with pins indicating where you've posted from - and once you've implemented the back-end above, it's remarkably simple. The only difference is that because we'll want to show the user an info window, we'll need to store the names, dates, and links for our posts. This just requires a simple modification of the back-end:

<?php 
    $points = '';
    $thePosts = '';

    query_posts('posts_per_page=500'); 

    while ( have_posts() ) : the_post(); 
    
        $points .= '(' . get_post_meta($post->ID, 'geo_latitude', true) . ',' . get_post_meta($post->ID, 'geo_longitude', true) . '),'; 
        $thePosts .= '(' . get_the_title() . '|' . get_permalink() . '|' . get_the_time() . '),';
    endwhile; 
    
    $points = substr($points, 1, -2); // Remove the initial '(' and final '),'
    $thePosts = substr($thePosts, 1, -2);
?>

<div id="canvas"></div>

So the only difference here is that we've now got another variable named $thePosts - which holds the title, permalink, and time posted for each post. We'll do essentially the same to this when we get into the JavaScript, and then we can use the data on the map. Notice I've used a pipe character (|) as the delimiter, as it's common to have commas in titles. So let's take a look at the init() function in our JavaScript:

function init(){
    var p = "<?php global $points; echo $points; ?>", mapCoords;

    p = p.split("),(");

    for(var x = 0; x < p.length; x++){
        p[x] = p[x].split(",");
        if(mapCoords == null){
            mapCoords = [new google.maps.LatLng(p[x][0], p[x][1])];
        }else {
            mapCoords.push(new google.maps.LatLng(p[x][0], p[x][1]));
        }
    }

    /* Create posts array */
    var posts = "<?php global $thePosts; echo $thePosts; ?>";
    posts = posts.split('),(');

    for(var x = 0; x < posts.length; x++){
        posts[x] = posts[x].split('|');
    }

    newMap('canvas', mapCoords, posts);
}

Here we've just added another variable and a for loop - notice this is above the newMap() call. We've also altered that call to pass in the posts variable, so newMap() now needs to accept it. Now that we've got our variables set up, we can get into changing up our hefty newMap() function. But before we get to that we need to define a global variable that we'll use in the function, so go ahead and declare one like so:

var infowindow = new google.maps.InfoWindow();

This variable will be used to display an information window when the user clicks on a point, and we can utilise the handy API once again to do this. Now we can get into the juicy function to create our map:

function newMap(id, mapCoords, posts){
    var centre = (new google.maps.LatLng(51.44031275716014, 0.3955078125)),
        zoomLevel = 6,
        route,
        myOptions,
        map;

    route = new google.maps.Polyline({
        path: mapCoords,
        strokeColor: "#2324e4",
        strokeOpacity: .70,
        strokeWeight: 7,
        editable: false
    });
		    
    myOptions = {
        center: centre,
        zoom: zoomLevel,
        mapTypeId: google.maps.MapTypeId.ROADMAP
    };

    map = new google.maps.Map(document.getElementById(id), myOptions);
    route.setMap(map);
    
    // New code starts here
	
    function getInfoWindowEvent(marker, x) {
        infowindow.close();
        infowindow.setContent('<div class="infowindow"><a href="'+posts[x][1]+'"><strong>'+posts[x][0]+'</strong><br>'+posts[x][2]+'</a></div>');
        infowindow.open(map, marker);
    }

    var markers = [];

    for(var x = 0; x < mapCoords.length; x++){
        markers[x] = new google.maps.Marker({
            position: mapCoords[x],
            map: map,
            icon: 'http://example.com/images/pin.png' // Remove this to use the default pin
        });

        google.maps.event.addListener(markers[x], 'click', (function(x) {
            return function(){
                getInfoWindowEvent(markers[x], x);
            };
        })(x));
    }
}

Notice the comment about half way through - I'll begin explaining the code from there onwards.

First up we create ourselves a function called getInfoWindowEvent() - this will be used to move our info-window that we defined earlier around the map and to put our post content into it each time a point is clicked. The HTML inside the setContent() function is entirely up to you, so feel free to play with the styling and organisation of the code. Next up we create a new array called markers[] - this will hold every marker for the map. Then we have a for loop to iterate over every set of map coordinates that we have. Inside the loop we first create a new marker - the options here should be fairly obvious, but the icon one is entirely optional, remove it to use the default, well-known pin icon, otherwise supply a valid URL. Once we've got our array of markers we need to add event listeners to get ready for click events. The function we use simply calls the getInfoWindowEvent() function with the appropriate variables, namely the marker that has been clicked, and the index of that marker.
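That immediately-invoked function wrapped around each listener deserves a closer look, because it's the one genuinely subtle bit. Here's a minimal, self-contained demonstration (makeHandlers() is my own illustrative stand-in for the marker loop, not part of the tutorial code):

```javascript
// Without the wrapper, every listener would close over the same loop
// variable x and see its final value once the loop has finished. Passing x
// into an immediately-invoked function gives each handler its own captured
// copy, frozen at that iteration's value.
function makeHandlers(count) {
    var handlers = [];
    for (var x = 0; x < count; x++) {
        handlers.push((function (captured) {
            return function () { return captured; }; // each sees its own x
        })(x));
    }
    return handlers;
}
```

Each function returned by makeHandlers() reports the index it was created with - which is exactly why clicking the third marker opens the third post's info window rather than the last one's.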

As a little side-note, if you want your own, custom pin on single post pages for the map, it's simply a matter of replacing img/wp-pin.png, and for greater customisation there's also wp_pin_shadow.png.

And that's it! You should now have a beautiful, dynamic map displaying all your posts - perfect for travel blogs!

Quick Update

So I just thought I better update the ol' blog - not posted since 2011 - yikes! I'm still here, and I've not given up on it! 🙂

I'm busy working on a large-scale project at the moment, so I've been consumed with work, but I will be posting a bunch of stuff relating to the project soon - there is much to blog about in the way of in-depth tutorials into all kinds of things from complex JavaScript, to even more complex PHP. I've got ideas for articles on not only the techniques, but also approaches to things like refactoring and testing code and websites as a whole. So stay tuned, and I'll be with you in the not too distant future!

Create a dynamic Twitter-search feed

For years the ancient (and some would say instinctive) art of tweeting was restricted to the avian species of the sky, but recently a service has come about that allows us humans to partake in the practice. You might have heard of this little company - they call themselves Twitter - and they have provided an extensive API for us web-folk to play with their service. At this point you might be wondering what the heck I'm going on about, so how's this? We'll be creating a dynamic Twitter feed based on a search term, that automatically updates without the user having to refresh the page, while ensuring we don't overload our own servers with constant polling (more on that in a mo). We'll even take a look at allowing only a certain set of users to appear in our stream, to avoid spammers. Let's get started!

A quick bit of background about the technical side of this before we get going, though. For us to be able to update our stream constantly we need AJAX to poll (or make a request to) a server to ask for any new tweets. Now, when I first began playing around I had a page on my own server that was requested every 10 seconds, which would then grab, parse, and output any new tweets and return them to my stream. But, and it's a big but, this approach can devastate your servers if you're not careful. If you've never delved into this kind of polling before, you're lucky! But take it from me: having 100 users each requesting a page on your server every 10 seconds is a really bad idea. So in this tutorial we'll look at how we can use JavaScript to parse and output results directly from Twitter's servers.

To begin we'll look at how we can GP+O (Get, Parse, and Output) our tweets with PHP. Here's our code:

function searchTwitter($search) {
    $url = 'http://search.twitter.com/search.atom?rpp=300&q='.urlencode($search) ;
    $ch = curl_init($url);
    curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE);
    $xml = curl_exec ($ch);
    curl_close ($ch);

    $result = new SimpleXMLElement($xml);
    
    foreach ($result->entry as $entry) {
        $author = trim($entry->author->name);
        $name = explode(' (', $entry->author->name);
        $content = trim($entry->title);
        $time = @strtotime($entry->published);
        $id = $entry->id;

        echo "<li data-id=\"".str_replace('tag:search.twitter.com,2005:', '', $id)."\">
                <img src=\"http://api.twitter.com/1/users/profile_image/$name[0]?size=normal\" />
                <div class=\"content\">
                    <span class=\"name\">".substr($name[1], 0, -1).":</span><br>
                    <span class=\"tweet\">".$content."</span><br>
                    <span class=\"time\">Posted ".gmdate('j/n/y g:i a',$time)."</span>
                </div>
            </li>";
    }
}
	
searchTwitter('myquery');

Because we're good developers we're encapsulating our code into a handy, reusable function - aren't we good?! Let's look at what we're doing here. First off we use PHP's cURL library to make a request to Twitter. There are a bunch of URL variables available, all of which can be found over at the Twitter API docs; the only ones we're concerned with are "rpp" - results per page - and "q" - query. I've set the results-per-page pretty high, but the function of this variable is fairly self-explanatory. As for the query variable, notice the use of the handy urlencode() function, which will take care of encoding things like hash-tags and spaces in our queries. Phew! That's the first line done! The next few lines simply request the page, and shove the resulting data into the $xml variable.

We then use the excellent SimpleXML parser to translate our raw XML data into a useful variable. From then onwards we use a foreach() loop to go through every tweet in our list. If you'd like to know what variables are contained in each entry, just whack a print_r() in the loop. For our purposes we only need to access a few parts of each entry. Here's a list of our variables, and what they do:

  • $author: The user's name in the format "[User name] ([Full name])"
  • $name: An array of the user's name in the form [0]=>"User name", [1]=>"Full name)" - yes, that's a bracket on the end
  • $content: The tweet itself, all tidied up using trim()
  • $time: A useful representation of when the tweet was posted
  • $id: The unique tweet ID - we'll use this later when we request updates
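
The split used to build $name is worth seeing in isolation. Here's the same "first part (second part)" parsing sketched in JavaScript (parseAuthor() is an illustrative name of mine - the tutorial's PHP does this with explode(' (', ...) and substr(..., 0, -1)):

```javascript
// Mirrors the PHP: split the author string on " (" and drop the trailing
// ")" from the second half.
function parseAuthor(author) {
    var parts = author.split(' (');
    return {
        first: parts[0],               // the part before the bracket
        second: parts[1].slice(0, -1)  // the bracketed part, ')' removed
    };
}
```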

We can use these variables to then output a list that looks very much like Twitter itself, with the user's profile image to the left and data to the right. Our code outputs list items with nicely formatted dates, and here's where you might want to update the code to reflect your markup. And that's it for the PHP! However, if you want to have a stream of tweets only from approved users, you would want the following, updated code:

function searchTwitter($search) {
    $url = 'http://search.twitter.com/search.atom?rpp=300&q='.urlencode($search) ;
    $ch = curl_init($url);
    curl_setopt ($ch, CURLOPT_RETURNTRANSFER, TRUE);
    $xml = curl_exec ($ch);
    curl_close ($ch);

    $result = new SimpleXMLElement($xml);
    
    $hidden = array();
    
    foreach ($result->entry as $entry) {
        $author = trim($entry->author->name);
        $name = explode(' (', $entry->author->name);

        if(in_array(strtolower($name[0]), array('user1', 'user2'))){
            $content = trim($entry->title);
            $time = @strtotime($entry->published);
            $id = $entry->id;
            echo "<li data-id=\"".str_replace('tag:search.twitter.com,2005:', '', $id)."\">
                    <img src=\"http://api.twitter.com/1/users/profile_image/$name[0]?size=normal\" />
                    <div class=\"content\">
                        <span class=\"name\">".substr($name[1], 0, -1).":</span><br>
                        <span class=\"tweet\">".$content."</span><br>
                        <span class=\"time\">Posted ".gmdate('j/n/y g:i a',$time)."</span>
                    </div>
                </li>";
        }else {
            array_push($hidden, $name[0]);
        }
    }

    echo "<!-- Tweets from: ";
    for($x = 0; $x < count($hidden); $x++){
        echo $hidden[$x] . ', ';
    }
    echo " have been hidden -->";
}
	
searchTwitter('myquery');

Notice the addition of the $hidden array - which will allow us to keep track of any blocked tweets. Stepping inside our loop you'll notice the addition of an if statement that checks whether the entry was posted by a user in an array of approved tweeters. If it was, we continue to output the tweet; otherwise we add the user to the array of blocked users. And just for fun, we output an HTML comment at the end to let us know if any users were blocked.
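The whitelist hinges on PHP's in_array(). If you ever need the same check on the client side, a minimal JavaScript equivalent is only a few lines - this is my own sketch, not the utility translation mentioned in the next section:

```javascript
// A minimal stand-in for PHP's in_array(), sufficient for an
// approved-users check against an array of lowercase usernames.
function inArray(needle, haystack) {
    for (var i = 0; i < haystack.length; i++) {
        if (haystack[i] === needle) {
            return true;
        }
    }
    return false;
}
```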

Now that we've got our PHP sorted, let's take a look at how we can use JavaScript to make this bad-boy dynamic! Before we write any of our own code, we need to borrow from some excellent chaps a couple of utility functions - if you want to format the date any differently to how Twitter returns it by default, you'll want to go and grab the JavaScript Date.format code, and if you want to only allow tweets from pre-approved users, you'll want a translation of PHP's in_array() function. Once you've got that code, we can go straight ahead and use:

var lastUpdate;

function update(){
	$.ajax({
		// Only ask Twitter for tweets newer than the last one we've seen
		url: "http://search.twitter.com/search.json?q=myquery&since_id="+lastUpdate,
		dataType: "jsonp",
		success : function(data){
			var tweets = data.results;
			tweets.reverse(); // oldest first, so prepending keeps chronological order
			for(var x = 0; x < tweets.length; x++){
				var date = new Date(tweets[x].created_at);
				date = date.format('d/m/y g:ia');
				$('.stream').prepend('<li data-id="'+tweets[x].id_str+'" class="new hidden"><img src="'+tweets[x].profile_image_url+'" /><div class="content"><span class="name">'+tweets[x].from_user_name+'</span><br><span class="tweet">'+tweets[x].text+'</span><br><span class="time">Posted '+date+'</span></div></li>');
				$('.new').slideDown().removeClass('new');
			}
			
			// Remember the newest tweet's ID for the next request
			if(tweets.length > 0){
				lastUpdate = tweets[tweets.length - 1].id_str;
			}
			
			// Poll again in 10 seconds
			setTimeout(function(){ update(); }, 10000);
		}
	});
}

lastUpdate = $('.stream li:first').data('id');
setTimeout(function(){update();}, 10000);

So in the code above we've created a function that makes an AJAX request to Twitter to ask for new tweets. We use the most recent tweet's ID in our request via the "since_id" parameter, stored in the lastUpdate variable (this is first assigned a value at the bottom of the code, where it extracts the ID from the first list item in the ".stream" list). Notice the URL features a ".json" extension - pure JavaScript goodness that will allow us to play around with the data. If the request is a success we then go about adding any new tweets to our stream.

We first create the tweets variable, and reverse it - we do this because they are returned in reverse-chronological order (most recent first), and we want to output them in chronological order. We then loop through all the new tweets, formatting, prepending, and sliding-down one-by-one. Once the loop is finished we check to see if we actually had any new tweets, and if we did, we update our lastUpdate variable to reflect the most recent tweet in our stream. Finally we use the setTimeout() function to call our function again in 10 seconds - essentially mimicking real-time updates. And that's our code!
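Stripped of the jQuery and DOM work, the bookkeeping inside that success handler boils down to something like the sketch below (processBatch() is a hypothetical helper for illustration; the id_str and text fields mirror Twitter's response):

```javascript
// Sketch of the success handler's bookkeeping: results arrive
// newest-first, we reverse to chronological order, and remember the
// newest ID so the next request only fetches newer tweets.
function processBatch(results, lastUpdate) {
	var tweets = results.slice().reverse(); // oldest first
	var texts = [];
	for (var x = 0; x < tweets.length; x++) {
		texts.push(tweets[x].text); // stand-in for the DOM prepend
	}
	if (tweets.length > 0) {
		lastUpdate = tweets[tweets.length - 1].id_str;
	}
	return { texts: texts, lastUpdate: lastUpdate };
}
```

Note that when the batch is empty, lastUpdate is simply passed back unchanged - exactly what the if statement in our real code guards against.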

If you want to allow updates from only approved users, here's what the JavaScript looks like:

var lastUpdate;
var allowedNames = ['user1','user2'];

function update(){
	$.ajax({
		url: "http://search.twitter.com/search.json?q=myquery&since_id="+lastUpdate,
		dataType: "jsonp",
		success : function(data){
			var tweets = data.results;
			tweets.reverse();
			for(var x = 0; x < tweets.length; x++){
				if(in_array(tweets[x].from_user, allowedNames)){
					var date = new Date(tweets[x].created_at);
					date = date.format('d/m/y g:ia');
					$('.stream').prepend('<li data-id="'+tweets[x].id_str+'" class="new hidden"><img src="'+tweets[x].profile_image_url+'" /><div class="content"><span class="name">'+tweets[x].from_user_name+'</span><br><span class="tweet">'+tweets[x].text+'</span><br><span class="time">Posted '+date+'</span></div></li>');
					$('.new').slideDown().removeClass('new');
				}else {
					$('.stream').prepend('<!-- Blocked Tweet from: '+tweets[x].from_user+' -->');
				}
			}
			
			if(tweets.length > 0){
				lastUpdate = tweets[tweets.length - 1].id_str;
			}
			
			setTimeout(function(){ update(); }, 10000);
		}
	});		
}

lastUpdate = $('.stream li:first').data('id');
setTimeout(function(){update();}, 10000);
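If you'd rather not hunt down a ready-made translation of PHP's in_array(), a minimal sketch (covering only the needle-in-an-array case used above) might look like:

```javascript
// Minimal translation of PHP's in_array() - just enough for the
// approved-usernames check in the update() function above.
function in_array(needle, haystack) {
	for (var i = 0; i < haystack.length; i++) {
		if (haystack[i] === needle) {
			return true;
		}
	}
	return false;
}
```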

And that is how to create a dynamic Twitter stream using PHP and JavaScript!

Create a bullet-proof contact form

Note: This article has been marked for a quality review and will soon be updated.

Contact pages are usually one of the basic building blocks of any website, and while many simply feature an email address for spam bots to pick up and abuse, or even a handy 'mailto' link, the best ones feature a proper contact form. To make a bulletproof one we're going to need some thick glass and a riot shield... or rather, PHP and JavaScript. Having those bad-boys on our side will ensure we can create a beautiful AJAX-enabled means of contact for our users. We also need to ensure our form will work on the rare occasion that a user has JavaScript turned off *gasp* - I know, it's a scary thought, but I'm sure we'll figure something out!

So we'll start with some simple HTML to set out our fields, the following code is what we'll be working with:

<form id="contact" class="right" method="post" action="contact-post.php">
	<h3 class="hidden success"><br/>Message sent!</h3>
	
	<label>Name: <span class="warning right"></span>
		<input type="text" name="name" />
	</label>
	
	<label>Email: <span class="warning right"></span>
		<input type="text" name="email" />
	</label>

	<label>Message: <span class="warning right"></span>
		<textarea name="message"></textarea>
	</label>
	<input class="right" type="submit" value="Send" />
</form>

Aside from the obvious, the form features a few extra elements - namely the .warning elements - we'll see what they're for in a moment. I also assume that you have the class of .right set up to float elements to the right; if not, then you'll need to float the affected elements individually in your CSS. Right, that's the HTML sorted - let's take a look at the JavaScript for this puppy.

function validateEmail(email){
	var re = /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
	return email.match(re);
}

$('#contact').submit(function(){
	var name, email, message, errors;
	
	errors = 0;
	name = $('input[name=name]').val();
	email = $('input[name=email]').val();
	message = $('textarea[name=message]').val();
	
	$('.warning').text('');
	
	if(name==''){ $('input[name=name]').siblings('.warning').text('This field is required!'); errors++; }
	if(email==''){ $('input[name=email]').siblings('.warning').text('This field is required!'); errors++; }
	else if(!validateEmail(email)){ $('input[name=email]').siblings('.warning').text('Please enter a valid email!'); errors++; }
	if(message==''){ $('textarea[name=message]').siblings('.warning').text('This field is required!'); errors++; }
	
	if(errors==0){
		var dataString = $(this).serialize() + '&js=true';
	
		$.ajax({
			url: 'contact-post.php',
			data: dataString,
			type: 'POST',
			success: function(data){
				$('form label, form input[type=submit]').slideUp(500, function(){
					$('form .success').hide().removeClass('hidden').slideDown(500);
				});
			}
		});
	}
	
	return false;
});

Now, for the moment, we'll ignore the validateEmail() function and take a look at the form submission code. First off we set up some variables for the values in our form - this saves us from longer code snippets and from querying the DOM too much. Once they're set up we give the user the benefit of the doubt, removing any warning that may previously have been set by clearing the text from the elements with the class of .warning. And then we validate the fields. The checks on the name, email, and message fields simply verify that the user has entered a value - if they haven't, we tell them so and increment our errors variable by 1. The email field gets an extra check using the validateEmail() function we defined earlier. There's no need to worry if you don't understand how that function works - Regular Expressions are a world of their own. All we need to know is that it tells us whether the user has entered a valid email address.
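To see the validation flow in isolation, here's roughly the same logic with the DOM stripped out (countErrors() is a hypothetical helper for illustration, not part of the form code):

```javascript
// Same email regex as the form uses.
function validateEmail(email) {
	var re = /^(([^<>()[\]\\.,;:\s@\"]+(\.[^<>()[\]\\.,;:\s@\"]+)*)|(\".+\"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
	return email.match(re);
}

// Count errors the same way the submit handler does, minus the
// warning-label DOM work. The form submits only when this returns 0.
function countErrors(name, email, message) {
	var errors = 0;
	if (name === '') { errors++; }
	if (email === '') { errors++; }
	if (message === '') { errors++; }
	if (!validateEmail(email)) { errors++; }
	return errors;
}
```

Note that an empty email trips both the required check and the validity check - which is why giving the email field a combined if/else check is a little tidier.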

Following our rather basic validation checks we then test to see if the form has passed - because we've been using our errors variable all along, if it's set to '0' we can rest assured nothing has gone wrong. If that's the case we serialise the form, and use a simple AJAX request to submit the data to our server. If our server is happy, we show our users a success message. Now all we need to do is set up our contact-post.php file, like so:

function redirect($hash){
	if($hash!='success'){
		echo 'Invalid ' . $hash;
	}else {
		echo 'Message sent!';
		echo '<meta http-equiv="refresh" content="0;url=http://example.com/">';
	}
	die();
}

if($_POST){
	if(empty($_POST['js'])){
		// Validate info here
		if(empty($_POST['name'])){ redirect('name'); }
		if(empty($_POST['email'])){ redirect('email'); }
		if(empty($_POST['message'])){ redirect('message');  }
		if(!filter_var($_POST['email'], FILTER_VALIDATE_EMAIL)){ redirect('email'); }
	}
	
	$sendto = 'you@example.com';

	$name = $_POST['name'];
	$email = $_POST['email'];
	$message = $_POST['message'];
	
	$to = $sendto;
	$subject = "[MySiteName] Message";
	
	$message = "
	<html>
	<head>
	<title>Contact form Submission</title>
	</head>
	<body>
	<p style='font-family:Arial, Helvetica, sans-serif; color:black;'>The following was sent by <strong>".$name." (Email: ".$email."):</strong></p>
	<p style='font-family:Arial, Helvetica, sans-serif; color:black; font-size:16px;'>".nl2br(stripslashes($message))."</p>
	
	<p style='font-family:Arial, Helvetica, sans-serif; color:black;'>(Sent on: ".gmdate('d\/m\/y').")</p>
	</body>
	</html>
	";
	
	$headers = "MIME-Version: 1.0" . "\r\n";
	$headers .= "Content-type:text/html;charset=iso-8859-1" . "\r\n";
	// Strip newlines from the name to prevent header injection, and give From: a real address
	$headers .= 'From: ' . str_replace(array("\r", "\n"), '', $name) . ' <' . $email . '>';
	
	mail($to,$subject,$message,$headers);
	
	redirect('success');
}else {
        redirect('submission - no data entered!');
}

The function at the top will be used when we want to give the user some feedback - if they entered all the correct info they'll be given a success message and then redirected back to your site in a timely fashion; otherwise they'll be told what is wrong with their submission. Then we get to the guts of the script. We first make sure there is some data to play with - if there isn't, we tell the user off. If there is, we go on to see if the user has JavaScript running. You may have noticed the little 'js' variable that we added to the serialised form back in our JavaScript - that is our way of finding out whether the form is being submitted via an AJAX request. If the variable is empty, we let PHP validate the form's content - essentially identical to the JS validation we used previously.

We then set up a really basic email, and send it to the address in the $sendto variable. Finally we redirect the user back to the site. As a side note, do remember that this page will only be seen by users with JavaScript turned off.

And that's it! You now have a swanky contact form that, for all intents and purposes, is bullet-proof, but having said that, I wouldn't actually test that theory with a gun of any kind...

Prevent broken animations in jQuery

Note: This article has been marked for a quality review and will soon be updated.

I recently posted an article on Animating a Site's Loading using jQuery - a handy technique to spice up any site. But an issue that became apparent using this technique, along with some other animation methods, is that sometimes user interaction can prevent loading and other animations from finishing correctly, and possibly lead to pages that look broken because of it. This is the sort of thing that only user-testing will usually uncover, as many web-designers simply sit back and watch the animations, before actually interacting with the site. To explain what I mean I'll show you a basic example of navigation animations - and the one that prompted this article.

$('nav li').each(function(index){
    $(this).slideDown(200*(index+1));
});

$('nav li').hover(function(){
    $(this).stop().animate({'paddingTop':'10px'},200);
});

Now the code above works perfectly fine - it slides down each list element in my navigation panel, and whenever a user rolls over one with their mouse the list element moves down by 10 pixels. Great, but we have a slight problem; namely, the stop() function. The code uses it to prevent an animation build-up if a user quickly moves their mouse over multiple list elements, and for that it works very well. But because we are using a loading animation, if a user hovers over a list element before it has completed the slide-down animation, the list element will be stuck in limbo - or rather, it won't finish sliding down.

This is caused by the fact that hover() is bound to each list element at the same time that the animations are being executed. To solve this issue we need a way of waiting for the animations to complete before binding any events to our elements. One way of doing this might be to add a class to each element as it finishes its loading animation, and then check whether the class exists before performing any events. But that approach will quickly clog up both your JavaScript and your HTML. To avoid messy code, we should instead only bind such events once the animation has completed.

To do that, we just make a simple alteration, like so:

$('nav li').each(function(index){
    $(this).slideDown(200*(index+1), function(){
        $(this).hover(function(){
            $(this).stop().animate({'paddingTop':'10px'},200);
        });
    });
});

The code above uses the callback function of the slideDown() function to wait until the animation completes. We then bind the hover event to each element individually, and we have ourselves a much better setup to prevent our sites breaking in the hands of hover-happy users.
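Pared down to its essence, the pattern is simply: attach behaviour from inside the completion callback, so it can never run mid-animation. A sketch (animateThenBind() is a hypothetical illustration, not jQuery API):

```javascript
// The callback-binding pattern: bind() only runs after animate()
// signals completion, never before or during.
function animateThenBind(animate, bind) {
	animate(function onComplete() {
		bind();
	});
}
```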

Importing hand-drawn art into Photoshop

Using Photoshop to design websites is great - it allows for pixel-perfect designs that can be styled to your heart's content using all the powerful tools included in the application. But sometimes it's nice to add a human touch, and have hand-drawn art in our designs. Now I personally don't do this all too often, but the other day when a client asked for a design that I simply couldn't create in Photoshop, I reached for the good old pencil and paper to draw a fish sat in a chair (this might seem somewhat odd out of context, but there you go), and some nice lettering.

So what are our options? I wanted to draw the art, colour it in with Photoshop, and then add it to the site - and the distinct lack of thorough tutorials I found didn't do a great deal to help, so I thought I'd document my process here.

I started by sitting down and actually drawing the final product, and after an hour or two of trying to make a realistic fish I scanned the bad boy in, and came up with the following:

Hand-drawn fish

Great, I was happy with the simple design, and now I needed to 'vectorise' it. And here is where I hit a brick wall. Illustrator wouldn't play ball - I tried a billion different settings for the Live Trace tool, and it simply failed miserably. I concluded this was down to the faintness of the drawing, so I traced it with a 0.3mm fine-liner to make it stand out a little more. Here's what I ended up with:

Fish fineliner

Not bad, but Illustrator still just looked at me blankly. And that's when I found out about VectorMagic - an online tool that converts hand-drawn art to vectors. The service is free for the first 2 images you upload; after that there's a subscription of $8 a month, or you can buy the desktop version. So while it's not a completely free solution, it works damn well! (I eventually forked out for the desktop client and I can honestly say it works every time.) Now if you don't fancy being locked into a service like that, the only other option is to manually trace the art (using the pen tool) in Photoshop - I know, that isn't really a solution, and it's why I seriously suggest you give VectorMagic a go.

So, having put my image through the VM process I got the following:

Traced Fish

Yay! It's starting to look a lot better. From there I began to colour in my creation using the fill tool, and the brush. After about 10 minutes I had:

Fish coloured-in

And then I added highlights and shadows to make it look more realistic - using a layer with the blend mode set to multiply, and a small brush using black and white for shadows and highlights respectively. The final fish took its form:

Fish final

And that's now my process for importing vector art.

Hide an element with jQuery whilst scrolling

Note: This article has been marked for a quality review and will soon be updated.

Today I came across an interesting problem that no amount of Google searches could solve. So I had to knuckle down and come up with a solution - and surprisingly it was much simpler than it had first appeared.

So the problem is hiding an element while a user is scrolling - it might be a menu, a header, or a picture of my cat, Sophie. Either way, we can't just use jQuery's simple delay() function to help us out - we need some raw JavaScript. To accomplish this we'll store the page's current scroll position in a variable, wait a short period of time, and then check whether that scroll position has changed. If it has, we leave the element hidden; otherwise we show it to the user again.

And here's the rather simple code to do just that:

$(document).scroll(function(){
	$('section').fadeOut();
	
	// $(window) is reliable cross-browser, where $('body').scrollTop() can return 0
	var scrollA = $(window).scrollTop();
	
	setTimeout(function(){
		if(scrollA == $(window).scrollTop()){
			$('section').fadeIn();
		}
	}, 200);
});
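The core of the trick, separated from jQuery, is a "snapshot, wait, compare" check. Here's a sketch with the timer injected as a function so it can run anywhere (showIfIdle() and its argument names are hypothetical, for illustration only):

```javascript
// Snapshot a value, wait, and only act if it hasn't changed - the
// same check the scroll handler performs with scrollTop().
function showIfIdle(getScroll, show, wait) {
	var snapshot = getScroll();
	wait(function () {
		if (snapshot === getScroll()) {
			show();
		}
	});
}
```

In the real code, getScroll is $(window).scrollTop, show is the fadeIn() call, and wait is a 200ms setTimeout.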

And there we have it! A really simple effect made easy with a tiny bit of JavaScript.

You can check out the fiddle over here: http://jsfiddle.net/LJVMH/