Finding child porn and Daily Show clips... without hashes or keywords

LTU Technologies has made its name with image search tools. That might not sound interesting until you think of all the different ways that such tools can be used: identifying child pornography, stopping copyright violations, checking that stock photos only end up in proper locations. As LTU launches an update to its flagship Image-Seeker software, I spoke with company vice president Kevin Smith about how LTU's "image DNA" can provide better image identification than hash-based technologies and how it's already doing so for cops and companies around the world.

Finding child porn can be tough business, even when you have a suspect's computer in hand. The images you're looking for could be called anything and located anywhere; file extensions might have been altered. The computer could contain tens of thousands of pictures. Some might be hidden.

So to do the job, the entire contents of a suspect's hard drive are dumped into a massive log file by EnCase, the widely used forensic examination tool. LTU's Image-Seeker can function as a plug-in for EnCase, and it scans the log file for images. Once these are identified, the software generates "image DNA" for each picture: a signature based on a pixel-level examination of color, shapes, textures, object arrangement, and other elements. The result is an identifier far more robust than a simple file hash, which can be fooled by tiny tweaks to an image.
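LTU's actual "image DNA" algorithm is proprietary, but a toy perceptual hash conveys the general idea of a signature derived from image content rather than file bytes. The sketch below (in Python, with invented pixel data) uses a simple difference hash: each bit records whether brightness rises between neighboring pixels, so a small pixel tweak that would completely change a cryptographic file hash barely moves the signature.

```python
# Illustrative sketch only: LTU's real "image DNA" is proprietary and far
# richer. A difference hash (dHash) shows the core idea of a content-derived
# signature that, unlike a file hash, survives small pixel tweaks.

def dhash(pixels):
    """pixels: 2D list of grayscale values, one inner list per image row.
    Each bit records whether brightness rises between adjacent pixels."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming(a, b):
    """Number of differing bits: a small distance means similar images."""
    return bin(a ^ b).count("1")

original = [
    [10, 20, 30, 40],
    [40, 30, 20, 10],
    [15, 15, 60, 60],
    [90, 10, 90, 10],
]
# The same image with one brightened pixel: a cryptographic hash of the
# file would change completely, but the dHash moves by a single bit.
tweaked = [row[:] for row in original]
tweaked[0][0] = 25

print(hamming(dhash(original), dhash(tweaked)))  # prints 1 (of 12 bits)
```

Real perceptual hashes first scale the image down to a fixed grid, which is what buys resilience to resizing; this sketch skips that step for brevity.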

In the LTU system, images are never identified in a vacuum; context is everything. Before the system can even function, it needs to be trained on databases of images. LTU has created several of its own, including a 50,000-image database of pornography. Coupled with such a database, the software can generally handle pictures that have been resized, cropped, or inverted, or that have had faces blacked out, all of which would trip up a hash-based approach.
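In code terms, matching against a trained database amounts to a nearest-neighbor search over signatures rather than an exact hash lookup. This hypothetical Python sketch (the feature vectors, names, and threshold are all invented for illustration) shows why a transformed copy can still match: its signature lands close to, even if not exactly on, the original's.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical reference database: signatures of known images, built during
# training. These short vectors stand in for LTU's richer "image DNA".
database = {
    "known_image_A": [0.9, 0.1, 0.4],
    "known_image_B": [0.2, 0.8, 0.5],
}

def best_match(query, threshold=0.95):
    """Return (name, score) of the closest known image, or (None, score)."""
    name = max(database, key=lambda k: cosine(query, database[k]))
    score = cosine(query, database[name])
    return (name, score) if score >= threshold else (None, score)

# A resized or lightly edited copy of image A yields a slightly perturbed
# signature, but it still matches known_image_A; an exact lookup would miss it.
print(best_match([0.88, 0.12, 0.41]))
```

The threshold is where the accuracy tuning mentioned later comes in: raise it and you miss more copies, lower it and you flag more innocent images.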

Smith notes that in order to properly classify newly discovered images as pornographic, the database must constantly be updated to stay on top of the latest porn trends. Over the course of the last 20 months, this has meant that the database has been jammed with animated porn and low-quality cell phone porn, both of which have exploded in popularity online.

Picture courtesy LTU Technologies

In child porn investigations, this sort of database can be useful for turning up images, but police will generally use a more specific law-enforcement database of child porn to see if similar material turns up on a suspect's computer. They can also seed the database with specific images of a child and then search for matches.

LTU already counts the French police, the Italian police, the FBI, and the Department of Homeland Security among its worldwide customers.

More than a porn filter

Such tools can also be useful in the enterprise. Last week, LTU launched Image-Seeker 2.0, which features additional tools that make it easy to keep an eye on corporate networks. It's not just about porn (though that is a real issue for companies); it's also about violent images, corporate blueprints, and trade secrets. Smith claims that if you can define something visually, Image-Seeker can build a profile to help find it.

Accuracy obviously depends on the type of content, the database used to train the engine, and the kind of search being run. Smith claims 90 percent accuracy out of the box, though this is often tuned for specific deployments. "Customers are understanding that 90 to 95 percent accuracy is generally pretty good," he says. Companies and law-enforcement agencies that deploy such technology need to understand that a machine alone will never do all the work; at best, the computer can filter out the extremes, present likely matches, and let humans focus on making the final decisions.
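The human-in-the-loop workflow Smith describes can be sketched as a simple triage by confidence score. The thresholds below are invented for illustration; the point is that the machine auto-handles only the extremes and routes the ambiguous middle band to a reviewer.

```python
# Hypothetical triage sketch: the engine auto-clears obvious negatives,
# auto-flags near-certain matches, and sends the rest to a human reviewer.
# Threshold values and filenames are invented for illustration.

AUTO_FLAG = 0.95   # confidence at or above this: flagged without review
AUTO_CLEAR = 0.10  # confidence at or below this: dismissed without review

def triage(scored_images):
    """Split (name, score) pairs into flagged, cleared, and needs-review."""
    flagged, cleared, review = [], [], []
    for name, score in scored_images:
        if score >= AUTO_FLAG:
            flagged.append(name)
        elif score <= AUTO_CLEAR:
            cleared.append(name)
        else:
            review.append(name)
    return flagged, cleared, review

results = triage([("a.jpg", 0.99), ("b.jpg", 0.02), ("c.jpg", 0.60)])
print(results)  # a.jpg flagged, b.jpg cleared, c.jpg goes to a human
```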

Although the technology was developed for still images, it can also be applied to video by processing keyframes. Just as with still images, the database needs to be trained with known material; the system then performs the "image DNA" analysis on sampled frames at regular intervals. The resulting signatures are compared against incoming video in order to flag content that may be controlled by a content owner.
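A minimal sketch of that keyframe pipeline, under stated assumptions: frames are stand-in integers, `signature()` is a trivial placeholder for the real per-frame image analysis, and the sampling interval is invented.

```python
# Sketch of the keyframe approach described above: sample one frame every
# N, compute its signature, and flag it if it matches known content.

KEYFRAME_INTERVAL = 30  # e.g. one sampled frame per second of 30 fps video

def signature(frame):
    # Placeholder: a real system would run the full image analysis here.
    return frame % 1000

def flag_video(frames, known_signatures):
    """Return indices of sampled keyframes that match known content."""
    hits = []
    for i in range(0, len(frames), KEYFRAME_INTERVAL):
        if signature(frames[i]) in known_signatures:
            hits.append(i)
    return hits

# Known signatures come from training on a content owner's reference
# footage, just as a still-image database is trained on known pictures.
known = {signature(f) for f in [30, 500]}
print(flag_video(list(range(100)), known))  # frame 30 is flagged
```

Sampling keyframes rather than every frame keeps the comparison tractable at the cost of possibly missing very short clips; the interval is the tuning knob.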

Such a system has obvious applications for video-sharing sites like MySpace and YouTube. LTU won't say who it is currently working with, but its technology is already being used this way. The goal, Smith says, is not just to block content but also to divvy up ad revenue and make payments; for instance, when a user uploads a clip with a 50 Cent track in the background, the music label could get a cut of the ad revenue from that video.

LTU is entering a red-hot field by adapting its tech for video-sharing sites. We've already profiled companies like MotionDSP that exist to help web companies solve these thorny content problems, but to date, no one is willing to go on record about what they're up to.