If you’ve ever wondered how to make a button in DVD Studio Pro play a random track, get my script here.

Yes, the write-up took a lot longer than the script itself. And yes, it is possible that DVD Studio Pro 4 has something like that built in. I wouldn’t know ’cause I don’t have version 4 and I can’t find the manual online.

It took a while, but my movie pages are finally online. Let me know if they work for you. I’ll be editing the movie posts to point to the right pages.

By the way, I realised that the smaller-sized MP4s are the perfect size for the new iPod “video”. You should be able to download them and put them straight onto your iPod. See, there’s a reason to be standards compliant :-)

On an unrelated note, I noticed last week that the backups on my old Linux file, mail and print server were failing. To give you an idea how old this machine is: it’s a Pentium II 350 MHz with 128 MB RAM running RedHat 7.3. The machine runs rdiff-backup (recommended!) on a RAID 5 array. The backups had failed somewhere in the last month, causing rdiff-backup to revert the backup area to the previous known good backup state. Trouble is, among the things being backed up is my home directory, including my mailbox in Maildir format. In my mailbox there’s a spam folder containing about 14000 spam mails at the moment – I get about 100 spam mails a day. Reverting these 14000 mails caused my ancient 0.11 version of rdiff-backup to rapidly consume all available memory; I guess there was some recursive code in it. I couldn’t just delete all the spam and be done with it, because the problem was in the files that were already backed up. I could have deleted all the backed-up files and started over, but there were a few things there whose recent history I wanted to keep. So I set myself the task of updating rdiff-backup to its latest version, 1.0.1.

I wasn’t very happy doing this, since I know that updating stuff on such an old version of RedHat Linux can be a chore. My fear became reality: compiling the new 1.0.1 version failed because it needed a newer version of librsync. But, as it turns out, Dag Wieers still maintains an archive of updated RedHat 7.3 RPMs. apt-get update and apt-get upgrade took care of librsync, and rdiff-backup 1.0.1 compiled without even a warning. Running the new rdiff-backup cleanly reverted the backup area and my automated backups started working again. All in all the process took about an hour and a half, thanks to Dag and the rdiff-backup team caring about old machinery. Thank you guys!

This was the second time in the last three years I’ve had to work on this small server. Last year a hard disk failed. Took me a while to find a 40 GB drive for the RAID 5 array :-) Remember, this machine can just about run Windows 98. Linux can be good for the lazy sysadmin.

In case you’re wondering about the 14000 spam mails I’m keeping: I’m building a spam corpus for SpamAssassin or some other Bayesian spam filter. I hope this will work better than my provider’s spam filter and Apple Mail’s spam filter combined. I just haven’t found the time to implement it. Which reminds me: time to install rdiff-backup on my Mac OS X machine.

In the past month I’ve been working on an article that starts like this:

In the last year, a lot of cheap (or not so cheap) HD video cameras came onto the market. These cameras are targeted towards consumers or professional users at the bottom of the scale (“prosumers”). This page tries to give a comprehensive overview of the different cameras and their characteristics.

It’s not finished yet, but Mike Curtis has beaten me to it anyway. So, if you’re interested in a shiny new “cheap” HD camera, go read his article for an overview of the market in the next couple of months.

A couple of remarks: the Panasonic HVX-200 looks like the camera to beat, but it isn’t out yet, nobody knows the exact tech specs and only a few people have seen footage shot with it. The same goes for the Canon XL-H1, although that one will be on the market sooner. Please remember that Mike Curtis’ overview is written from the perspective of indie filmmakers. If you’re looking for a run-and-gun, ENG or documentary camera, the Sony Z1 or the more expensive Canon XL-H1 will be hard to beat. The JVC HD-100 needs attention to get good images out of it (manual focus and iris) and has lousy low-light performance. Battery performance is awful, it won’t record 720p60 or 720p50, and there’s also the split screen issue. Panasonic P2 media is expensive and holds only 8 minutes at the highest quality. You will have to fiddle with hard disk recorders and/or laptops if you shoot hours of footage a day with a Panasonic HVX-200.

Especially for European users, I’d like to note this:

  • The Sony Z1, JVC HD-100 and Canon XL-H1 are multiformat cameras, PAL and NTSC compatible. All others have a specific PAL or NTSC version.
  • The Sony FX1e doesn’t have a fake 24p (CF 24) mode in Europe (although you wouldn’t want to use it if it existed).
  • The Canon XL-H1 will need a factory modification (meaning €€) before it will record 1080i50.
  • According to this post, the Panasonic HVX-200 won’t have a 24p feature in PAL countries.
  • The JVC HD-100 comes in two versions in Europe: an HD-100 without FireWire input and an HD-101 with it.
  • The state of HD in Europe is immature. HD television sets are rare and it’s not decided yet whether the HD broadcast standard will be 1080i50 or 720p50. The EBU leans towards 720p50 but the only European HD station is 1080i50. One of the biggest selling points for a Sony FX1 or Z1 is that it’s a damn good 16:9 DV cam. You can always edit in DV with it, even if you shoot in HD.

Update 2005-10-18:

According to this page, the Canon XL-H1 will be delivered as a 50 Hz camera in Europe (1080i50 plus a pseudo-progressive 25p solution called 25F) with the option to add 60i, 30F and 24F recording.

In case you’re wondering what’s happening, I’m working on the “business” end of my site. The goal was to embed my movies into simple presentable pages. I succeeded but in the process I learned once again why I don’t want to do this web development thing for a living.

Here is the goal: embed mp4 movies in my pages while keeping them XHTML compliant. I don’t want to force people into one and only one plugin, I don’t want the plugin to load whenever you load the page, and the page should be compatible with as many browsers as possible while staying very simple. I don’t want a special case for each and every browser.

The plan of attack was: create a large image on every page, and replace this image with the movie whenever it’s clicked. Straight and simple DHTML.

To replace an image, you need to find it (getElementById), find its parent (parentNode) and replace it with another element (parentNode.replaceChild(new_element, old_element)). Easy enough.
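That three-step dance can be sketched as a small function. This is only an illustrative sketch: the function name and ids are my own, and the `doc` parameter stands in for the browser’s global `document` so the snippet can also run outside a browser.

```javascript
// Minimal sketch of the find-parent-replace dance described above.
// `doc` stands in for the browser's `document`; names are my own choices.
function swapImageForMovie(doc, imgId, movieUrl, width, height) {
  var img = doc.getElementById(imgId);      // 1. find the image
  var obj = doc.createElement('object');    // 2. build its replacement
  obj.setAttribute('data', movieUrl);
  obj.setAttribute('type', 'video/mp4');
  obj.setAttribute('width', String(width));
  obj.setAttribute('height', String(height));
  img.parentNode.replaceChild(obj, img);    // 3. swap it in place
  return obj;
}
```

In a real page you would call something like `swapImageForMovie(document, 'player', 'movie.mp4', 384, 304)` from a click handler.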

To create a movie object, you need this code (official Apple Quicktime embedding code):

<OBJECT CLASSID="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B"
 CODEBASE="http://www.apple.com/qtactivex/qtplugin.cab"
 HEIGHT=yy
 WIDTH=xx>
  <PARAM NAME="src" VALUE="MyMovie.mov" />
  <EMBED SRC="MyMovie.mov"
   HEIGHT=yy WIDTH=xx
   TYPE="video/quicktime"
   PLUGINSPAGE="http://www.apple.com/quicktime/download/" />
</OBJECT>

That doesn’t really look like XHTML to me, and it’s not plugin-agnostic either. But it’s a start. I knew from Flash Satay that it’s possible to massage this kind of code into something more sensible. After a few hours, I arrived at this result: the minimum amount of code you need to get Mozilla derivatives to load a movie into a webpage:

<object data="movie.mp4" type="video/mp4" width="384" height="304" />

Nice and clean. You need to add 16 pixels to the height for the plugin’s controller bar, but other than that it’s perfect. It doesn’t work in Safari, though. Safari needs a param telling it what the src of the object is, even though that’s already given in the data attribute. So we arrive at this:

<object data="movie.mp4" type="video/mp4" width="384" height="304">
  <param name="src" value="movie.mp4" />
</object>

The nice thing is, this works in Microsoft Internet Explorer for PC too. But you have to wait until the whole movie has been downloaded before the plugin appears. Of course I already knew this from the Flash Satay article. So I settled on two versions: the one directly above for everything non-MSIE, and this one for MSIE:

<object classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B"
 codebase="http://www.apple.com/qtactivex/qtplugin.cab"
 width="384" height="304">
  <param name="src" value="movie.mp4" />
</object>

This clearly leaves MSIE users with no choice other than the Quicktime plugin, but at least they won’t have to wait till the movie has finished downloading. I could try the same trick as Satay by using a Quicktime reference movie which loads another one (use a qtsrc parameter on the object to point it to the final movie), but then my viewers would have to click twice before the final movie appears.

Next thing I tried was Internet Explorer for Macintosh. The horror. It needs an <embed> tag. Back to square one, it seems. So the code for MSIE became:

<object classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B"
 codebase="http://www.apple.com/qtactivex/qtplugin.cab"
 width="384" height="304">
  <param name="src" value="movie.mp4" />
  <embed src="movie.mp4"
   width="384" height="304"
   type="video/mp4"
   pluginspage="http://www.apple.com/quicktime/download/" />
</object>

Not XHTML compliant, but somehow I think MSIE users don’t really care.

Next stop: JavaScript. I’ve never written a single line of JavaScript in my life. My first try went like this:

function load_player(movie, hsize, vsize) {
  var W3CDOM = (document.createElement && document.getElementById);
  if (!W3CDOM) return;
  var player = document.getElementById('player');
  var p = player.parentNode;
  var el = document.createElement('object');
  el.setAttribute('width', hsize);
  el.setAttribute('height', vsize + 16);
  if (navigator.appName == 'Microsoft Internet Explorer'){
    var prm = document.createElement('param');
    prm.setAttribute('name', 'src');
    prm.setAttribute('value', movie);
    el.appendChild(prm);
    var mbd = document.createElement('embed');
    mbd.setAttribute('src', movie);
    mbd.setAttribute('pluginspage', 'http://www.apple.com/quicktime/download');
    mbd.setAttribute('width', hsize);
    mbd.setAttribute('height', vsize + 16);
    el.appendChild(mbd);
  } else {
    el.setAttribute('data', movie);
    el.setAttribute('type', 'video/mp4');
    /* for Safari */
    var prm = document.createElement('param');
    prm.setAttribute('name', 'src');
    prm.setAttribute('value', movie);
    el.appendChild(prm);
  }
  p.replaceChild(el, player);
  el.setAttribute('id', 'player');
}

This thing was meant to be called from an onClick handler on an image with an id of "player". The non-MSIE side of the function worked all right, but MSIE took exception to the attachment of the <embed> tag to the <object> (the line that says el.appendChild(mbd)). Whatever I tried, I couldn’t get around this. Technically MSIE is right: you cannot have an <embed> tag in an XHTML document. But this was neither the time nor the place for being pedantic.

Kind of an awkward situation: I need the <embed> tag for MSIE Mac to function, but MSIE (PC and Mac) won’t let me write it into the document. Then it dawned on me: I needed to inject some dirty HTML into the document, and here I was using all kinds of well-mannered methods. Dirty results call for dirty deeds, and the deed called for here goes by the name element.innerHTML (a Microsoft invention).

The only thing needed was something to put the dirty code into. I put a <div> around the <img> and was able to use this <div>’s innerHTML to create the necessary object and embed elements.

I tried to be nice and have the non-MSIE code leave the <div> alone and just replace the <img>, but that resulted in a movie plugin plus an empty space the size of the replaced image. So for now I’m replacing the whole <div> and it seems to work OK.

As a result, the body now has the following structure:

<div id="player">
  <img src=... onClick="load_player('movie.mp4', width, height)" />
</div>

and the Javascript function looks like this:

function load_player(movie, hsize, vsize)
{
  var W3CDOM = (document.createElement && document.getElementById);
  if (!W3CDOM) return;
  /* first do the IE stuff. */
  if (navigator.appName == 'Microsoft Internet Explorer'){
    var player = document.getElementById('player');
    var str = '<object classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B" ';
    str += 'codebase="http://www.apple.com/qtactivex/qtplugin.cab" ';
    str += 'width="' + hsize + '" height="' + (vsize+16) + '">\n';
    str += '<param name="src" value="' + movie + '" />\n';
    str += '<embed src="';
    str += movie;
    str += '" type="video/mp4" pluginspage="http://www.apple.com/quicktime/download" ';
    str += 'width="' + hsize + '" height="' + (vsize+16) + '"></embed>\n';
    str += '</object>';
    player.innerHTML = str;
  } else {
    var player = document.getElementById('player');
    var p = player.parentNode;
    var el = document.createElement('object');
    el.setAttribute('data', movie);
    el.setAttribute('type', 'video/mp4');
    el.setAttribute('width', hsize);
    el.setAttribute('height', vsize+16);
    /* for Safari */
    var prm = document.createElement('param');
    prm.setAttribute('name', 'src');
    prm.setAttribute('value', movie);
    el.appendChild(prm);
    p.replaceChild(el, player);
    el.setAttribute('id', 'player');
  }
}

You can see an example here. Let me know if it works for you, especially if you have a strange combination of browser and movie player plugin (I’m looking at you, lonely linux user).

I heard a rumour somewhere that the W3C is planning to drop the <img> tag and use <object>s instead. Somehow I disagree.

La Pasión started as a small, single-afternoon movie Roeland and I planned to make for school. We quickly realised it could be something bigger, so we reworked the scenario and filmed the next Sunday, and then the next Sunday, and then another and another Sunday. It took quite a long time because most of the time we had to wake the actors first (hi, Jeremy and Elliot) and then clean the guest/party room. But most of the time was spent setting up shots. It’s amazing how much longer it takes when using a tripod and all-manual camera settings compared to hand-held shooting with a camera in full auto mode.

The reason most of my previous stuff is handheld (see for instance Degoutant) is that our teacher tells us to shoot that way. In fact, when I showed him my tripod during one of the first lessons two years ago, he told me I wouldn’t be needing it for quite some time. It might seem counter-intuitive when every bulletin board on the web tells you to use a tripod if you want “professional looking” movies, but the truth of the matter is that it’s a lot easier and faster to use a handheld camera if you want to learn about composition, camera moves, shot continuity and rhythm. The end result might not be as good looking, but you do learn how to tell a story visually. I hope you can see the progress I’ve made if you compare my first movie with this one.

By the way, there are still some handheld shots in the movie. See if you can spot them. Another by the way: this is the first movie shown here shot with my new Sony FX1. I’m quite happy with it. I feel it will take some years until I’ve exhausted the possibilities of this camera. Next on the list is sound and light design and working with a larger crew. We deliberately chose not to use lights in this movie since there were only two of us: me as the director and Roeland as the camera operator.

Anyway, here it is.

Clip Curoon is my first videoclip. Some love it, some hate it. I guess you could call it an anti-videoclip. It’s a single shot movie, 5 minutes long. This one is take 9, the last take.

Image quality could have been a lot better. I think it was around this time (September 2004) that I started thinking about a better camera. As you can see, my Sony TRV 60 just doesn’t cut it, although it looks better on TV than on the web. I’ll have to incorporate gamma correction into my compress-for-web workflow sometime.

Anyway, here it is. Mind you, it makes no sense to look at this clip in a bright, noisy environment. So, close the curtains, put on your headphones and enjoy the movie.

By the way, the actor is Steven Vrancken. I think he does a very good job. The music is by Troissoeur.

I cut these promo-clips for Troissoeur from material shot by the camera crew at Dranouter Folk Festival, edition 2004. Although the footage was captured on DV, the visual quality is outstanding. That’s the difference a professional crew and professional cameras make. The visuals on stage are made by Ruben Bellinkx.

Two clips: Curoon and Trays. Enjoy!

Nigel Cooper wrote an article called The Art of White Balancing in which he argues that it’s best not to use manual white balance but to use the factory presets instead. His case rests on three points:

  1. White balance was invented as a way to counter colour drift in old analogue tube-based cameras. Modern cameras are perfectly stable; they do not need constant calibration anymore.
  2. Perfect white doesn’t exist in real life. You’ll always see some kind of tint in whites, be it amber, eggshell or blue. Sunsets are a perfect example of a situation where manual white balance will wreck the atmosphere.
  3. Manufacturers spend a lot of effort on getting the skin tones right in the factory presets. You wreak havoc on this delicate colour balance by using manual white balance.

I really don’t know what to think about this.

If you’re working with video and the title of this post means nothing to you, you should check out Graeme Nattress’s page on Y’CbCr chroma sampling.

Yesterday I wrote a post about the problems with BitTorrent and trackers and today I find out that BitTorrent has gone trackerless.

The basic idea is that the list of peers downloading the same torrent doesn’t need to be held by one or more central trackers; it can be contained in the network of downloading peers itself. The algorithm used for this is a Distributed Hash Table. The key is a checksum of some of the info in the .torrent file, and the value belonging to this key is the list of peers. These key-value pairs form the hash table. The table is distributed according to the following principle: the node(s) whose node address is closest to the key hold the corresponding value, in this case the list of peers.

So, originally the BitTorrent client would look at the .torrent file to find the tracker URL inside it. It would then contact this tracker, receive a list of peers, and start downloading the file. Now, the client downloads the .torrent file, computes the key from it, uses this key to contact the node that has the peer list (the Distributed Hash Table lookup), receives the peer list and starts downloading. Look Ma, no hands, no tracker necessary!
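The “closest node holds the value” rule can be sketched in a few lines. BitTorrent’s DHT is Kademlia-based: node addresses and keys are 160-bit ids, and the distance between two ids is simply their bitwise XOR. The toy 16-bit hex ids below are made up for illustration.

```javascript
// Sketch of the Kademlia-style "closest node" rule described above.
// Real ids are 160-bit SHA-1 values; these toy 16-bit ids fit in a plain
// JavaScript number, so a bitwise XOR works. (160-bit ids would need
// big-integer arithmetic.)
function xorDistance(a, b) {
  // a and b are hex strings of equal length
  return parseInt(a, 16) ^ parseInt(b, 16);
}

// Of the nodes we know about, find the one whose id is nearest the key;
// that node is responsible for storing the key's value (the peer list).
function closestNode(nodeIds, key) {
  return nodeIds.reduce(function (best, id) {
    return xorDistance(id, key) < xorDistance(best, key) ? id : best;
  });
}

var knownNodes = ['1a2b', '9f00', 'fe12'];
closestNode(knownNodes, 'ff00'); // → 'fe12', the id nearest the key
```

The nice property is that a node only needs to know a handful of other nodes; each lookup step moves it to a node whose id is closer to the key, until it reaches the one holding the peer list.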

It may look perfect, but there is one problem: how does the BitTorrent client find the node where the peer list resides? The good thing about a Distributed Hash Table lookup is that you don’t need to know every single node on the network, but you do need to know at least one of them. So how does BitTorrent find at least one of the downloading peers?

I generated a .torrent file with the 4.1.0 beta BitTorrent client to find out. The program hung, but it got far enough to create a torrent file. Turns out the answer is router.bittorrent.com. There is a new piece of info attached to the torrent file, called node, and it contains the address router.bittorrent.com, port 6881. Your client will contact this server to find out about the Distributed Hash Table. router.bittorrent.com is not a tracker and it won’t participate in any downloading or uploading at all. It’s just a well-known node in the Distributed Hash Table.
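Decoded, that extra piece of info looks roughly like this. This is a sketch only: real .torrent files are bencoded dictionaries, not JavaScript objects, the info dictionary contents are elided, and the exact field name and shape in the beta client may differ from the list-of-[host, port] form I’m assuming here.

```javascript
// Rough decoded view of a trackerless .torrent file (illustrative shape).
// Instead of an "announce" (tracker) URL there is a list of first-contact
// DHT nodes as [host, port] pairs.
var trackerlessTorrent = {
  nodes: [['router.bittorrent.com', 6881]],
  info: { name: 'movie.mp4' /* plus piece length, piece hashes, ... */ }
};

// A client would bootstrap its DHT routing table from these nodes:
var firstContact = trackerlessTorrent.nodes[0];
firstContact[0]; // 'router.bittorrent.com'
firstContact[1]; // 6881
```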

I’m not sure this is a good idea. The tracker is gone, but instead of multiple trackers, each of them serving different torrents, we now have a single point of failure for all “trackerless automatic” torrents, not just the ones confined to a certain tracker. I guess that’s why there’s a “trackerless node” option, where you can fill in a “first contact” node yourself. This means there are three ways to set up a torrent: the old-fashioned way with a tracker, the new way with a “first contact” node of your own, or you can rely on router.bittorrent.com.

In conclusion, trackerless BitTorrent makes BitTorrent easier for peer-to-peer file sharing, but it will not make a difference for reliably serving large content from a shared webhost. The seeding problem remains: you will have to keep seeding your torrent for as long as you want it to stay alive. The tracker part is gone, but that was the easiest part to implement on a shared webhost, since the tracker protocol was just HTTP. Seeding is harder from a webhost: it uses long-lived connections and a more difficult protocol. I can’t find any PHP seeders at the moment.