Virtual directing reached a new level when MIT unveiled its researchers' drone camera. Until now, aerial shooting with a drone was never a one-person job: it required a team of human operators to stay with the machine and steer it to capture the best cinematography on the fly. With MIT's system, there is no longer a need for a real-time human operator to follow the drone along its path; the virtually directed camera depends on no one but itself. The team of MIT researchers unveiled the idea in a teaser video released this week, which shows how moviemakers can shoot aerial scenes without a human technician. The system is also expected to be officially presented at an upcoming conference.

The team at MIT's Computer Science and Artificial Intelligence Laboratory calls the system "real-time motion planning for aerial videography." It requires the movie's director to input defined shooting parameters, such as frame width, shot tightness, and the subject's position within the frame. These settings can also be changed while the drone is airborne, and the aircraft will adjust its framing accordingly.
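To make this concrete, the director-facing inputs could be modeled as a small parameter object that the planner re-reads on every cycle. This is a minimal sketch under that assumption; the class and field names below are illustrative and not MIT's actual interface.

```python
from dataclasses import dataclass

@dataclass
class ShotParameters:
    """Director-specified framing targets (illustrative names, not MIT's API)."""
    frame_width: float                 # desired width of the shot, in metres at subject distance
    tightness: float                   # 0.0 = loose framing, 1.0 = tight close-up
    screen_position: tuple[float, float]  # desired (x, y) of the subject in the frame, 0..1

    def update(self, **changes):
        """Apply mid-flight changes from the director; a planner would
        pick up the new values on its next replanning cycle."""
        for key, value in changes.items():
            if not hasattr(self, key):
                raise AttributeError(f"unknown parameter: {key}")
            setattr(self, key, value)

# Initial framing chosen before takeoff...
shot = ShotParameters(frame_width=4.0, tightness=0.3, screen_position=(0.5, 0.4))
# ...and an adjustment made while the drone is airborne.
shot.update(tightness=0.7)
```

Keeping the parameters mutable in one place means the in-flight adjustment the researchers describe reduces to a single update call, with the planner treating the object as the current source of truth.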

Unsurprisingly, the MIT researchers' drone camera can also avoid hurdles and obstacles on the go. Other drones, such as DJI's Mavic Pro, already ship with subject tracking and obstacle-detection sensors, but MIT's system stands ahead of them by offering a more advanced form of these features with granular control. The system continually estimates the velocities of moving objects in its surroundings in order to sense any obstacle in its path, and it does so 50 times each second.
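The 50 Hz velocity estimate described above can be sketched as a simple finite difference between successive position samples taken 20 ms apart. This is an assumption for illustration; the actual system may use a more sophisticated estimator such as a Kalman filter.

```python
def estimate_velocity(prev_pos, curr_pos, dt=1 / 50):
    """Finite-difference velocity estimate (m/s) between two successive
    position samples taken dt seconds apart (dt = 1/50 s for a 50 Hz loop)."""
    return tuple((c - p) / dt for p, c in zip(prev_pos, curr_pos))

# An actor moved 4 cm along x between two 20 ms samples: roughly 2 m/s.
v = estimate_velocity((1.00, 0.0), (1.04, 0.0))
```

Running the estimate at 50 Hz keeps the predicted positions of moving obstacles fresh enough for the planner to replan around them before a collision becomes unavoidable.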

The MIT research team described the system in the following words:

“Unless the actors are extremely well-choreographed, the distances between them, the orientations of their bodies, and their distance from obstacles will vary, making it impossible to meet all constraints simultaneously. But the user can specify how the different factors should be weighed against each other. Preserving the actors’ relative locations on the screen, for instance, might be more important than maintaining a precise distance, or vice versa. The user can also assign a weight to minimize occlusion, ensuring that one actor doesn’t end up blocking another from the camera.”
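The trade-off the researchers describe, weighing competing framing constraints against one another, can be sketched as a weighted sum of squared errors, where the director's weights decide which constraint wins when they conflict. The specific terms and weight values below are illustrative assumptions, not the researchers' actual cost function.

```python
def framing_cost(errors, weights):
    """Weighted sum of squared framing errors. Each key names a soft
    constraint (relative screen position, subject distance, occlusion);
    its weight controls how strongly violations are penalised."""
    return sum(weights[k] * errors[k] ** 2 for k in errors)

# A director who values the actors' relative on-screen positions over
# exact camera distance assigns them a higher weight (values assumed).
weights = {"relative_position": 5.0, "distance": 1.0, "occlusion": 3.0}
errors = {"relative_position": 0.1, "distance": 0.5, "occlusion": 0.0}
cost = framing_cost(errors, weights)
```

Because all constraints are soft, the planner can always return some camera pose: when the actors' motion makes it impossible to satisfy every constraint exactly, it settles on the pose that minimizes this weighted total instead.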
