45 Billion Connected Cameras By 2022 Will Require More Data Processing Than Humans Can Deliver. Synthesis AI Has a Solution

June 23, 2021 · 6 min read

Synthesis AI founder and CEO Yashar Behzadi will waste no time telling you there will be 45 billion connected cameras in the world by 2022. This is over one-and-a-half times the 27 billion cameras operating today and a 216% increase since 2017.

Why does the global number of connected cameras matter to Behzadi? He knows the only way to evaluate the vast volumes of data they will produce is through artificial intelligence (AI). He also knows today’s AI systems aren’t equipped to handle this volume. Nor can they address the growing problem of bias caused when humans (and the systems they’ve created) interpret the data.  

Behzadi’s solution: AI powered by “synthetic data.”  

Current AI models require vast amounts of human-annotated data to help cameras identify what they're seeing. This is time- and labor-intensive, making it prohibitively expensive. It also has significant shortcomings: it's difficult for humans to interpret key data attributes, such as the 3D position of an object or its interactions with its environment, and despite best efforts, humans sometimes insert their own bias into the interpretation process. Additionally, increasing regulatory scrutiny and consumer privacy concerns make collecting and leveraging images of people complicated.

Behzadi’s company, San Francisco-based Synthesis AI, is pioneering synthetic data technologies that provide vast amounts of artificially manufactured data that is comprehensively and accurately labeled and available on-demand. It does this in part by bringing together technologies from the visual effects and CGI industry with cutting-edge generative neural networks, in order to create photorealistic representations of the real world. And, since the data is generated, there are no underlying privacy concerns. 

With the image-recognition market estimated to boom to $86B by 2025, it's clear that synthetic data is strongly positioned to be a central part of every computer-vision image interpretation system. On the back of the company's recent $4.5 million fundraise, backed in part by Bee Partners, we sat down with Yashar to learn more about the Synthesis AI approach.

What can this richness of data tagging unlock/accomplish?
Emerging computer vision applications in autonomous vehicles, robotics, AR/VR, retail, smart assistants, and smart homes require exact knowledge of the 3D world and the complex interactions between objects and agents.

Current deep-learning approaches to building computer-vision AI have leveraged supervised learning in which humans label key attributes in a scene and then machines learn to infer the labels from new images. This approach has been central to the development of new capabilities over the last few years. However, to support more complex models, a richer set of labels is required that humans are unable to provide. 

With synthetic data approaches, information about every pixel in the scene is explicitly defined. Providing labels for 3D position, material properties, surface normals, sub-segmentation, and more is inherent in the generation process. Furthermore, the data and labels can be provided on demand, allowing machine learning (ML) practitioners to experiment and iterate orders of magnitude faster than was previously possible.
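To make the idea of pixel-level ground truth concrete, here is a minimal, illustrative sketch in Python of the kinds of label channels a synthetic renderer can emit alongside each image. The channel names, shapes, and dtypes are assumptions chosen for illustration, not Synthesis AI's actual output format.

```python
# Illustrative only: the per-pixel ground truth a synthetic render could emit
# "for free", with no human annotation. Channel names and shapes are assumed.
import numpy as np

H, W = 480, 640  # assumed image resolution

synthetic_frame = {
    "rgb": np.zeros((H, W, 3), dtype=np.uint8),               # rendered image
    "depth": np.zeros((H, W), dtype=np.float32),               # metric distance per pixel
    "surface_normals": np.zeros((H, W, 3), dtype=np.float32),  # unit normal per pixel
    "segmentation": np.zeros((H, W), dtype=np.int32),          # semantic class id per pixel
    "instance_ids": np.zeros((H, W), dtype=np.int32),          # object instance id per pixel
}

# Because every value is known at render time, labels can be regenerated on
# demand whenever the label schema changes.
for name, array in synthetic_frame.items():
    print(f"{name:>16}: shape={array.shape}, dtype={array.dtype}")
```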

Equally as important, synthetic data is inherently private, enabling the development of human-centric AI models central to smart assistants, smart homes, and teleconferencing applications. 

Tell us about the future of Machine Learning. What is the right combination of real-world and synthetic data?
Synthetic data represents a significant paradigm shift in the development of computer-vision AI models. In today's model, companies first deploy hardware to collect training data. This data is then cleaned, labeled, and finally used to train models. Only after a long and expensive acquisition and labeling process are the capabilities of the production system finally known. For complex systems like autonomous vehicles, robotics, or consumer devices, this process can take months to years.

This paradigm is inverted from nearly every other engineering discipline, in which simulation tools, CAD programs, and design software are used to understand system-level trade-offs and inform overall design decisions. By this analogy, AI is at a “Wright Brothers” stage of development, in which designs are tested directly in the real world to understand limitations and inform design decisions.

With synthetic data approaches, the future of computer-vision development will involve complex system simulations in the cloud. Models will be trained virtually and optimized for deployment on target devices directly. The paradigm shift will be disruptive and enable the development of significant new capabilities. The move to virtual model development will also democratize the development of complex vision systems and usher in a new wave of startups. 

What industries and use cases are up first for synthetic data, and why?
Synthetic data will become a central part of every computer-vision model across use cases and industries. We are first focusing on human-centered data to support applications for smartphones, smart homes, smart assistants, teleconferencing, telemedicine, and emerging human-interaction systems. There is tremendous potential and investment in these areas, and companies are gated by access to privacy-compliant human images. We have seen strong traction in this area and are already working with leading technology and handset manufacturers.

How can synthetic data mitigate bias in ML algorithms? And privacy concerns?
AI systems often exhibit bias due to the skewed data used to train models. This is especially problematic in human-centered systems in which bias disproportionately affects certain ethnicities. Furthermore, these systems leverage images of people and are often built without the explicit consent of the consumers. Synthetic data is inherently private, since the data is generated. Given the programmatic nature of synthetic data, distributions of training data are balanced by design, leading to more ethical and fair AI models. 
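As a rough illustration of what “balanced by design” can mean in practice, the following Python sketch enumerates a generation plan over a hypothetical set of attributes so that every combination is requested equally often, rather than inheriting the skew of whatever real-world images happen to be available. The attribute names and values are invented for illustration.

```python
# Toy sketch of a balanced generation plan. Attributes and values are
# hypothetical; a real system would use its own taxonomy.
import itertools
import random

ages = ["18-30", "31-50", "51-70"]
skin_tones = ["type_1", "type_2", "type_3", "type_4", "type_5", "type_6"]
lighting = ["indoor", "outdoor", "low_light"]

# Enumerate the full cross-product so every attribute combination appears
# equally often in the requested training set.
generation_jobs = [
    {"age": a, "skin_tone": s, "lighting": l, "seed": random.randint(0, 2**31)}
    for a, s, l in itertools.product(ages, skin_tones, lighting)
]

print(f"{len(generation_jobs)} jobs, one per attribute combination")
```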

How does your product become scalable, beyond a white glove service? When will we know we’re “there?”
The key to scalability is to provide customers with a simple-to-use product that enables them to create the image data they need, on demand, in a self-serve manner. Our approach is to create a vertical-specific API to service a broad set of use cases. For instance, our Face API can be leveraged to build better segmentation models for teleconferencing companies or to provide new emotion-sensing capabilities for smart assistants. Powering the APIs is a highly scalable, horizontal, synthetic-generation platform that enables the rapid development of new APIs. We are also investing in techniques that generate AI-enabled assets (e.g., 3D models) to allow us to create the diversity and breadth required for these use cases.
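For a sense of what a self-serve, on-demand workflow might look like, here is a hypothetical job-submission sketch in Python. The endpoint URL, request parameters, and authentication header are placeholders invented for illustration and do not describe Synthesis AI's actual Face API.

```python
# Hypothetical example of submitting a synthetic-image generation job to a
# self-serve API. The URL, fields, and auth scheme below are placeholders.
import json
import urllib.request

job_spec = {
    "num_images": 1000,
    "resolution": [1024, 1024],
    "labels": ["segmentation", "depth", "facial_landmarks"],
    "expressions": ["neutral", "smile", "surprise"],
}

request = urllib.request.Request(
    "https://api.example.com/v1/faces/generate",  # placeholder URL
    data=json.dumps(job_spec).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <API_KEY>"},  # placeholder credential
    method="POST",
)
# urllib.request.urlopen(request)  # would submit the job in a real integration
```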

The key metric for success will be the amount and regularity of image generation per API. If we are able to demonstrate sustained utilization of our API for specific verticals, that will enable us to expand to more and more key verticals.

To learn more about Synthesis AI, visit their website or
follow them on LinkedIn for regular company updates.

About Bee Partners
Founded in 2009, Bee Partners is a pre-Seed venture capital firm that partners with revolutionary Founders working at the forefront of human-machine convergence across technologies that include robotics, AI, voice, i4.0, and synthetic biology. The firm leverages a singular approach to detecting new and emerging patterns of business as well as inside access to fertile but often overlooked entrepreneurial ecosystems to identify early opportunity in large, untapped markets. Bee’s portfolio companies consistently realize growth at levels that outstrip industry averages and secure follow-on capital from the world's top VCs.
