The biggest surprise at Google's hardware event yesterday was the launch of Clips, a tiny stand-alone AI-powered camera that caught everyone's attention.
How Does Clips Work?
It can hold up to three hours of video and images, and on top of that it automatically picks out the best moments for you. I'm still unsure whether Clips will catch on in the market, but technically it is very sound and a fascinating piece of AI engineering.
When I spoke with Clips product lead Juston Payne, he repeatedly emphasized that Clips is not meant to be an accessory to the Pixel smartphone or anything else.
“It’s an accessory to anything, I’d say. It’s a stand-alone camera. A new type of camera and insofar as that any digital camera has become an accessory to a computer or a phone, so too with this,” he said. “The reason for that comes back to the fact that the intelligence is built into the device to decide when to take these shots, which is really important because it gives users total control over it.”
Clips Sounds Simple, But Is It Really?
Unlike a Google product like Home, which relies entirely on the cloud, Clips is a largely self-contained unit. It is designed to capture your moments when you set it down somewhere, say in your living room, while you enjoy quality time with family and friends.
Its pre-trained machine learning models pick the best shots out of everything it captures and generate your clips automatically.
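In rough terms, that on-device selection amounts to scoring each captured frame and keeping only the top-ranked moments. Here is a minimal sketch of that idea; the `interest_score` function is a toy stand-in for Google's actual pretrained model, and all the data is invented for illustration.

```python
# Hypothetical sketch of an on-device selection loop: score each captured
# frame with a model and keep only the highest-scoring moments.

def interest_score(frame):
    # Stand-in for a pretrained vision model's "interestingness" score;
    # here just a toy heuristic over the frame's pixel values.
    return sum(frame) / len(frame)

def select_best_clips(frames, keep=3):
    # Rank all captured frames by score and return the top `keep` moments.
    ranked = sorted(frames, key=interest_score, reverse=True)
    return ranked[:keep]

# Invented "frames" as small lists of pixel values.
captured = [[10, 20, 30], [200, 210, 190], [90, 95, 100], [5, 5, 5]]
best = select_best_clips(captured, keep=2)
```

The important architectural point from the article is that this whole loop runs on the camera itself, not on a server.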
That means it still works whether you are an iOS user or on any other OS. There is, however, a companion app that lets you watch the clips on your device and share them. Impressively, the app's interface is straightforward, with a single button to manually start video recording.
“We care very deeply about privacy and control, and it was one of the hardest parts of the whole project,” Payne told me. “The thing is that until really quite recently, you needed at least a desktop or you needed literally a server farm to take imagery in, run convolutional neural networks against them, do semantic analysis and then spit something out.”
Google Turned to Intel's Movidius:
To run its models on the camera, Google turned to Intel's Movidius and its extremely low-power vision processing unit (VPU).
“In our collaboration with the Clips team, it has been remarkable to see how much intelligence Google has been able to put right into a small device like Clips,” said Remi El-Ouazzane, vice president and general manager of Movidius, Intel New Technology Group, in his company’s own announcement today. “This intelligent camera truly represents the level of onboard intelligence we dreamed of when developing our Myriad VPU technology.”
Over Time, It Will Learn More About the World:
Every AI model has to be trained, though, and to train Clips, the company worked with video editors and a group of image raters to prepare its models. “There’s not a great ML [machine learning] model that can say: a baby is crawling on the floor, that probably looks good,” explained Payne.
So Google collected a large amount of its own video, had editors on the team review the content and say what they preferred, and then had raters compare clips and pick which ones they liked better, which strengthened the training signal for the model.
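That rater workflow is essentially pairwise preference learning: fit a scoring model so that, for each judged pair, the preferred clip scores higher than the rejected one. The sketch below shows the idea with a tiny Bradley-Terry-style logistic objective; the features and data are invented, and this is an assumption about the general technique, not Google's actual training setup.

```python
import math

def score(w, x):
    # Linear score of a clip's feature vector under weights w.
    return sum(wi * xi for wi, xi in zip(w, x))

def train(pairs, dim, lr=0.5, epochs=200):
    # pairs: list of (preferred_features, rejected_features) from raters.
    w = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in pairs:
            # Probability the model agrees with the rater:
            # logistic of the score gap (Bradley-Terry style).
            p = 1.0 / (1.0 + math.exp(score(w, rejected) - score(w, preferred)))
            g = 1.0 - p  # push the preferred clip up, the rejected one down
            for i in range(dim):
                w[i] += lr * g * (preferred[i] - rejected[i])
    return w

# Invented features: [faces_visible, sharpness]. Raters in this toy data
# consistently prefer clips with faces.
pairs = [([1.0, 0.9], [0.0, 0.2]), ([1.0, 0.5], [0.0, 0.8])]
w = train(pairs, dim=2)
```

After training, the learned weights rank each preferred clip above its rejected counterpart, which is all the selection model needs.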
Over time, the system learns about the people you care about and the types of images you are interested in. Indeed, at $249 it's a pricey device, though I wouldn't be surprised if Clips caught on and started making regular appearances on baby shower registries.