Radar? Vision? AI?
Machine Learning. Artificial Intelligence. Data Science. Hearing these terms makes people think of technology that can solve all our problems, or of deadly robots that may or may not save us. AI has led to some amazing breakthroughs in the last decade: easing traffic congestion, defeating world champions at Go and chess, helping farmers produce more crops with fewer resources, and helping the Pfizer-BioNTech vaccine reach and pass clinical trials this past year.
For all its amazing triumphs, teaching a computer to do things that seem trivial to human beings has a colorful history of failures and hardships, and can lead to many unintended results and consequences, nowhere more so than in the field of computer vision.
Today we are going to be talking about one of these exciting breakthroughs which we had the honor of facilitating.
What did we create?
In the second half of 2020 we launched a competition to solve a problem that we may have solved one way or another a hundred thousand years ago: can we tell the difference between an animal and a human being?
Sounds simple, right? Show a person on the street a picture of a human and a picture of a giraffe, and even most 3-year-olds will be able to tell them apart.
But what if the picture wasn't clear? What if, instead of a picture, all we had was a sound recording? Or, in our case, a different type of picture altogether: a radar track. A lot of tracks, in fact. To be precise, 6,656 different tracks.
We were approached by MAFAT/DDR&D, the Israel governmental body in charge of research and development (the Israeli counterpart to DARPA) with a very interesting problem in the field of radar. After discussing possible solutions for the problem, we decided to take the “open innovation” approach and host a data science competition.
The competition’s goal was straightforward: Can we increase the reliability of radar tracks to tell us if we’ve detected a human or an animal?
To answer this, our team at Webiks set up a wide-scale data science competition with open entry to any and all participants. The competition's objective, in more formal terms, was to explore automated solutions that enable the classification of humans and animals tracked by radar systems with a high degree of confidence and accuracy. The participants' goal was to classify each radar track in our dataset as either human or animal.
To ensure we had the best of the best joining the competition, we invested in a large prize pool ($40,000!) and made great efforts to ensure that the data science community was fully aware of the competition. It also goes without saying that winning first place carried the added value of the unofficial title of "the best radar data scientist in the world".
We had an unbelievable number of participants from all over the world join us in our competition: 1,009 different people and organizations from over 40 countries made around 4,300 submissions.
Public vs private dataset
So how did everyone measure up? Once we had results from all contestants on the public dataset, it was time for the true test: how well would they do on our private dataset? How well would their models generalize to new data, unseen during the training phase? This was what we had all been waiting for: did they create a genius machine learning algorithm, or was the "artificial intelligence" tailor-made for the public dataset?
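The public/private split described above can be sketched in a few lines. This is an illustrative example of the general mechanism, not the actual MAFAT evaluation pipeline; the function name, fraction, and seed are assumptions for demonstration only. Contestants iterate against the public portion, while the private portion stays hidden until the final ranking, so a model tuned to the public scores gains no automatic advantage.

```python
import random

def split_tracks(tracks, public_fraction=0.5, seed=42):
    """Shuffle tracks and split them into a public set (scored during the
    competition) and a private set (held out for the final ranking)."""
    rng = random.Random(seed)
    shuffled = list(tracks)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * public_fraction)
    return shuffled[:cut], shuffled[cut:]

# Contestants see scores on `public` only; `private` decides the winners.
public, private = split_tracks(range(100))
```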
After months of work we had results! We asked our finalists to submit two final machine learning solutions to be scored on the private dataset, publishing the better of the two. Our top three candidates achieved the following results:
| Team | ROC AUC (rank) |
| --- | --- |
| inanikas (Axon Pulse) | 0.9085 (2) |
Even a human trained to decide whether a track belongs to a human or an animal doesn't do much better than flipping a coin (0.5 ROC AUC), while our top three contestants achieved the unbelievable result of ROC AUC scores above 0.9: given a random human track and a random animal track, their models rank the human track higher more than nine times out of ten!
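For readers unfamiliar with the metric, ROC AUC is exactly that pairwise-ranking probability: the chance that a randomly chosen positive (human) track gets a higher score than a randomly chosen negative (animal) track. A minimal sketch, not the competition's actual scoring code:

```python
def roc_auc(labels, scores):
    """ROC AUC via pairwise comparison: fraction of (positive, negative)
    pairs where the positive scores higher (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
good   = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]  # ranks all humans above all animals
print(roc_auc(labels, good))   # -> 1.0, a perfect ranking
```

A classifier that assigns scores unrelated to the labels lands near 0.5, which is why coin-flipping is the baseline.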
We were blown away by the level of talent that joined the competition, and saying we were happy with the results they came up with would be an understatement. We want to thank all our contestants for giving us their time and talent for this competition and we wish to congratulate our winners, GSI Technologies, Axon Pulse, and Ido Kazma.
The winners – learn from the best!
After a long tournament we had our competition victors. Our winners used some truly creative and innovative methods to achieve their results, including ensemble learning and radar-specific data augmentation. Luckily for anyone with an interest in deep learning, radar, convolutional neural networks, or simply how to excel in any competition, the winner, GSI Technologies, and the runner-up, Axon Pulse, both published very informative articles on their process, methodology, and experience in the competition.
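To give a flavor of one of those methods: a common form of ensemble learning is simple prediction averaging (soft voting), where each model outputs a per-track probability of "human" and the ensemble averages them. This is a generic sketch of the technique, not the winners' actual models or weights:

```python
def ensemble_average(model_predictions):
    """Average per-track probabilities across models (simple soft voting).
    `model_predictions` is a list of per-model prediction lists, all the
    same length; the result has one averaged probability per track."""
    n_models = len(model_predictions)
    return [sum(preds) / n_models for preds in zip(*model_predictions)]

# Three hypothetical models scoring the same three tracks:
model_a = [0.9, 0.2, 0.7]
model_b = [0.8, 0.4, 0.5]
model_c = [0.7, 0.3, 0.9]
ensemble = ensemble_average([model_a, model_b, model_c])  # ~[0.8, 0.3, 0.7]
```

Averaging tends to cancel out the individual models' uncorrelated errors, which is why ensembles of diverse models often beat any single member.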
We highly recommend their blog posts:
MAFAT Radar Challenge Second Place:
Mafat Radar Challenge First Place:
Open innovation is a creative approach that allows large corporations or research centers to tap the highest-quality distributed knowledge, achieving amazing insights and results by combining the resources and vision of the larger body with the innovation and agility of startups and talented individuals.
We at Webiks have become experienced at facilitating the open innovation approach, and specifically data science competitions. We have strong ties to the worldwide data science community and can host a competition in any field you're interested in (just ask us!).
Do you have a problem, data, or question that you think the best data scientists in the world could solve? Do you want to know more about data science and open innovation? Are you interested in joining or hosting our next competition? Learn more about us at webiks.com, subscribe to our newsletter, or contact us at firstname.lastname@example.org.