Folks, I've just uploaded a video of my latest project: a Connect 4-playing robot that uses an NXTCam to identify the moves made by the human player:
Basically the camera runs modified firmware that lets me grab full frames or individual scan lines. The NXT uses these to watch for moves being made by the human player. The actual game-play code is based on the 4RowBot project, modified to use my move-sensing code and to drive my robot player.
The image-processing code uses the new high-speed i2c mode supported by leJOS to grab the video data. This runs at 125Kbps (standard NXT i2c runs at 9.6Kbps). Even at this speed, grabbing a full frame (352x288 pixels) takes over 20s, and a full image would occupy most of the available RAM, so the move detection only checks a subset of the available scan lines. Doing things this way, a move can be detected in around a second. The scanner identifies cells that contain red or yellow tiles, or that are empty. One of the big problems is changes in the lighting; to handle that, the program uses a section of the board as a control point and runs an adjustment routine to select an exposure setting that gives more consistent tile colour readings...
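To see why a full frame is impractical, here's a back-of-envelope check of the raw transfer time. The bits-per-pixel figure and the assumption of a fully utilised link are mine, not from the project; real i2c adds per-transaction addressing and register overhead, which is why the observed time is well over this lower bound:

```java
// Rough lower bound on frame transfer time over the i2c link.
public class FrameTimeEstimate {
    public static double secondsPerFrame(int width, int height,
                                         int bitsPerPixel, double linkBitsPerSec) {
        double bits = (double) width * height * bitsPerPixel;
        return bits / linkBitsPerSec;
    }

    public static void main(String[] args) {
        // 352x288 frame at an assumed 8 bits per pixel over a 125Kbps link:
        // ~6.5s of raw payload alone, before any i2c protocol overhead.
        System.out.println(secondsPerFrame(352, 288, 8, 125_000));
    }
}
```

Even this optimistic figure makes a per-move full-frame grab a non-starter, which is what motivates sampling only a few scan lines.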
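The red/yellow/empty cell decision could be sketched as a simple threshold test on the averaged RGB values sampled from a cell. This is a hypothetical illustration, not the project's actual code, and the threshold values are made up; the exposure-adjustment routine exists precisely because fixed thresholds like these drift with the lighting:

```java
// Hypothetical sketch: classify one board cell from its averaged RGB sample.
public class CellClassifier {
    public enum Cell { RED, YELLOW, EMPTY }

    // Threshold values are illustrative only, not from the original project.
    public static Cell classify(int r, int g, int b) {
        if (r > 140 && g > 110 && b < 90) return Cell.YELLOW; // red+green strong, blue weak
        if (r > 140 && g < 90 && b < 90)  return Cell.RED;    // red dominates
        return Cell.EMPTY;                                    // board background / open slot
    }
}
```

The yellow test must run before the red test, since a yellow tile also has a strong red component.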