New Invention – VMX Project: Real-time training of visual object detectors

Webapp for real-time training of visual object detectors and an API for building vision-aware apps. Let our vision empower your vision.
Support this Project on Kickstarter: http://www.kickstarter.com/projects/visionai/vmx-project-computer-vision-for-everyone

What if your computer was just a little bit smarter? What if it could understand what is going on in its surroundings merely by looking at the world through a camera? Such technology could be used to make games more engaging, our interactions with computers more seamless, and allow computers to automate many of our daily chores and responsibilities. We believe that new technology shouldn’t require advanced knobs, long manuals, or domain expertise.

The VMX project was designed to bring cutting-edge computer vision technology to a very broad audience: hobbyists, researchers, artists, students, roboticists, engineers, and entrepreneurs. Not only will we educate you about potential uses of computer vision with our very own open-source vision apps, but the VMX project will give you all the tools you need to bring your own creative computer vision projects to life.

Why you’ll love VMX

VMX gives individuals all they need to effortlessly build their very own computer vision applications. Our technology is built on top of 10+ years of research experience acquired from CMU, MIT, and Google (see About The Founders section below). By leaving the hard stuff to us, you will be able to focus on creative uses of computer vision without the headaches of mastering machine learning algorithms or managing expensive computations. You won’t need to be a C++ guru or know anything about statistical machine learning algorithms to start using laboratory-grade computer vision tools for your own creative uses. Because we believe groundbreaking technology needs revolutionary creativity, our mission statement is simple: Let our vision empower your vision.

Why we built VMX

In order to make the barrier of entry to computer vision as low as possible, we built VMX directly in the browser and made sure that it requires no extra hardware. All you need is a laptop with a webcam and an internet connection. Because browsers such as Chrome and Firefox can read video directly from a webcam, you most likely have all of the required software and hardware. The only thing missing is VMX.
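To give a sense of how little is needed on the browser side, here is a minimal sketch of reading webcam video into a page using the standard getUserMedia API that Chrome and Firefox provide. The element id "preview" is an assumption, and no VMX-specific code is involved yet.

```typescript
// Minimal sketch: stream webcam video into a <video> element using the
// standard getUserMedia API. The element id "preview" is an assumption;
// any <video> element on the page would work.
async function startWebcamPreview(): Promise<void> {
  const video = document.getElementById("preview") as HTMLVideoElement;

  // Ask the browser for a video-only stream from the default camera.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });

  // Attach the stream to the <video> element and start playback.
  video.srcObject = stream;
  await video.play();
}

startWebcamPreview().catch((err) => {
  // The user may deny camera access, or no camera may be present.
  console.error("Could not open webcam:", err);
});
```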

Training at your fingertips

The “killer feature” of VMX is an effortless method for training your own object detectors, directly in the browser. We talked to many aspiring developers and quickly realized that many people’s crazy ideas involve the ability to recognize different (and sometimes quite personal) objects. By waving an object in front of your laptop’s webcam, you will be able to train your own object detector in a matter of minutes. Using VMX to train your first object detector is like playing a video game for the first time, and we’ve been working really hard to give you an unforgettable first-time experience.

Creating a new object detector requires drawing a few selection boxes directly over the input video stream and then spending some time in “learning mode.” While you are in learning mode, the detector continues to run in real-time while learning about the object, making it ready for your application in a matter of minutes. You can then save a detector, or “object model,” for later use.
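As a rough illustration of what this workflow could look like from a developer’s point of view, here is a hypothetical sketch. The endpoint paths, field names, and the SelectionBox shape are assumptions for illustration only, not the actual VMX API.

```typescript
// Hypothetical sketch of training a detector over a video stream.
// Endpoints and field names are assumptions, not the real VMX API.

interface SelectionBox {
  x: number;      // left edge, in pixels, within the video frame
  y: number;      // top edge
  width: number;
  height: number;
}

// Start a new detector from a few user-drawn boxes over the current frame.
async function createDetector(name: string, frame: Blob, boxes: SelectionBox[]): Promise<string> {
  const form = new FormData();
  form.append("name", name);
  form.append("frame", frame);
  form.append("boxes", JSON.stringify(boxes));

  const res = await fetch("/api/detectors", { method: "POST", body: form });
  const { detectorId } = await res.json();
  return detectorId; // handle used for learning mode and later detection calls
}

// Toggle "learning mode": the detector keeps running while refining its model.
async function setLearningMode(detectorId: string, enabled: boolean): Promise<void> {
  await fetch(`/api/detectors/${detectorId}/learning`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ enabled }),
  });
}

// Persist the learned "object model" so it can be reloaded later.
async function saveModel(detectorId: string): Promise<void> {
  await fetch(`/api/detectors/${detectorId}/save`, { method: "POST" });
}
```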

Running Multiple Object Detectors

You will be able to train multiple detectors for the different objects you care about. With VMX you can load, save, and manage all of your object models. You can run multiple detectors in real-time, use the GUI to make them faster or more robust, and most importantly, you can always improve your object detector later by enabling “learning-mode.”
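Continuing the same hypothetical sketch, running several saved models against a frame might look like the following; again, the endpoints and response fields are assumptions rather than the real VMX API.

```typescript
// Hypothetical sketch of running several saved detectors on each frame.
// Endpoint paths and response fields are assumptions, not the real VMX API.

interface Detection {
  modelName: string;
  box: { x: number; y: number; width: number; height: number };
  score: number; // detector confidence for this frame
}

// Load a set of previously saved object models by name.
async function loadModels(names: string[]): Promise<string[]> {
  const res = await fetch("/api/models/load", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ names }),
  });
  const { sessionIds } = await res.json();
  return sessionIds;
}

// Send one frame and collect detections from every loaded model.
async function detectAll(sessionIds: string[], frame: Blob): Promise<Detection[]> {
  const results = await Promise.all(
    sessionIds.map(async (id) => {
      const form = new FormData();
      form.append("frame", frame);
      const res = await fetch(`/api/sessions/${id}/detect`, { method: "POST", body: form });
      return (await res.json()).detections as Detection[];
    })
  );
  return results.flat();
}
```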

Diverse input formats

VMX allows for a variety of input formats. Whether it’s a webcam, a YouTube video, or a map-flyover, if you can render it on your screen, VMX can use it. Process previously recorded videos, learn from Google Image search, or have a camera watch your refrigerator: the possibilities are endless.
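The “if you can render it, you can use it” idea maps naturally onto the browser’s canvas element, which can copy pixels from a video, an image, or another canvas. The sketch below uses only standard browser APIs; how the resulting frame is handed to VMX follows the hypothetical calls sketched earlier.

```typescript
// Minimal sketch: grab a frame from anything the browser can render
// (a <video> playing a webcam stream, a recorded clip, an <img>, etc.)
// by drawing it into a canvas and exporting the pixels as a Blob.
function grabFrame(source: HTMLVideoElement | HTMLImageElement, width: number, height: number): Promise<Blob> {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;

  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas not supported");
  ctx.drawImage(source, 0, 0, width, height); // works for video, image, or canvas sources

  return new Promise((resolve, reject) =>
    canvas.toBlob((blob) => (blob ? resolve(blob) : reject(new Error("toBlob failed"))), "image/jpeg")
  );
}
```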

Advanced Model Editor

To help you train an object detector in a very difficult scenario (e.g., you want to train a Tom vs. Geoff detector), we built an advanced model editor which lets you visually tweak the learned model. The Model Editor GUI inside VMX lets you move images from the positive side to the negative side, and vice-versa. All you need to know about machine learning is that a “positive” example is what VMX thinks is the object and a “negative” example is what VMX thinks is not the object.
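For completeness, a relabeling action like the one the Model Editor performs might reduce to a single call along these lines; the endpoint and field names are again assumptions for illustration.

```typescript
// Hypothetical sketch of the relabeling action behind the Model Editor:
// moving a training example from the positive set to the negative set
// (or back). Endpoint and field names are assumptions for illustration.
async function relabelExample(
  detectorId: string,
  exampleId: string,
  label: "positive" | "negative"
): Promise<void> {
  await fetch(`/api/detectors/${detectorId}/examples/${exampleId}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ label }),
  });
}
```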

Video used with permission. Video Copyright (c) of its respective owner.

Written by Thinker

My real name is Onil Maruri, and I am an entrepreneur who lends a helping hand wherever I can.
