Laser-Equipped Drones that Zap Small Unwanted Pests (in a fixed area)

I've seen this topic popping up in popular tech news (mostly from innovation labs) these past few months, and it seems to offer quite an effective solution. I remember a YouTube video detailing how an innovation lab (Intellectual Ventures) plans to use it in the fight against malaria, i.e. stationing guard posts at all four corners of a farmer's residence and zapping the mosquitoes with photonic lasers. (more information in the link below)


Figure 1. A time-lapse photo of a mosquito getting zapped is shown (t=0 starting at the leftmost frame).
Source: http://www.intellectualventures.com/assets_inventions/142/shootdownsequence2__large.jpg

Technically, the idea has been around for years; it was first introduced to the public around six years ago but has only gained traction quite recently. Take an application to ichthyology as an example. An increase in marine ectoparasites due to changes in climate and weather patterns has been causing salmon fishermen sleepless nights, but such nights have been lessened with the help of lice-hunting underwater drones. In the fish pens of the far North Sea in Norway, these bots perambulate underwater, scouring the premises for sea lice, and will fry the critters from as far as 2 meters away. Their identification mechanism is similar to how smartphones pick out human faces, but works at a much faster pace. Thus far, estimates suggest that only two such drones will be needed per fish pen.
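The actual pipeline these drones use isn't public as far as I know, but the face-detection analogy suggests something like the sketch below: a classifier scans each video frame for lice-like shapes and hands any hit over to the targeting system. The cascade file, camera index, and zap_target function are all hypothetical stand-ins of my own.

    # Hypothetical sketch of frame-by-frame detection, in the spirit of
    # smartphone face detection. "sea_lice_cascade.xml" and zap_target()
    # are made-up placeholders; the real drones' pipeline is not public.
    import cv2

    def zap_target(x, y, w, h):
        # Placeholder: hand the bounding box to the laser-aiming system.
        print(f"Target acquired at ({x}, {y}), size {w}x{h}")

    detector = cv2.CascadeClassifier("sea_lice_cascade.xml")  # hypothetical model
    camera = cv2.VideoCapture(0)                              # underwater camera feed

    while True:
        ok, frame = camera.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # detectMultiScale slides the classifier over the image at several
        # scales, much as a phone does when it boxes faces in the viewfinder.
        hits = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in hits:
            zap_target(x, y, w, h)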


Promising as it is, the solution still needs to be backed by formal documentation to be worth considering as an alternative to more expensive and tedious methods.

It does make me wonder whether we could use other naturally occurring factors in our environment as a substitute for lasers. Would it be possible, or perhaps cheaper, to focus sound waves on a small area and cause permanent injury to an insect a millimeter in size? Or would a giant mosquito net prove more economical?


Deep, Deep Learning, Artificial Intelligence and the Race Towards Quantum Supremacy


Most prominent in machine vision applications (such as self-driving cars and cancer identification), machine learning has taken almost all sectors of the industry by storm. This year, things have gone up a notch with Google's push to create a 7x7 array of qubits (a portmanteau of the words "quantum" and "bit") on a single integrated circuit.
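To put the 7x7 figure in perspective, here is a back-of-the-envelope calculation (my own, not Google's) of why roughly 49 qubits is often cited as the edge of what classical machines can simulate: the state vector doubles with every qubit added.

    # Rough arithmetic on why ~49 qubits sits near the classical simulation limit.
    # Assumes a brute-force state-vector simulation with one complex number
    # (16 bytes) per amplitude; real simulators use cleverer tricks.
    n_qubits = 49
    amplitudes = 2 ** n_qubits          # number of complex amplitudes to track
    memory_bytes = amplitudes * 16      # 8 bytes each for real and imaginary part
    print(f"{amplitudes:.2e} amplitudes")            # ~5.6e14
    print(f"{memory_bytes / 2**50:.1f} petabytes")   # ~8 PB of memory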

Figure 2. Google's 2x3 qubit array quantum computing chip is shown above.
Source: Lucero, E. (2017, June). "Google Aims for Quantum Computing Supremacy." IEEE Spectrum, p. 8.



Being a mere dilettante in quantum computing, I gather that the main issue this technology addresses is error correction. Physicists say such a system is still far removed from what truly motivates the study (which I believe is the replication of the human brain?). But if Google succeeds in this endeavor, it will have a powerful decryption tool before the year ends. By the way, Google isn't the only player making significant strides in this field; IBM has also pledged to jump-start a project for a 50-qubit system in the coming years. What's more, it plans to make such a system accessible through the cloud!
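The "decryption tool" remark is usually a reference to Shor's algorithm, which would factor the large numbers underpinning RSA. A toy illustration of the idea, entirely my own and purely classical: factoring reduces to finding the period of modular exponentiation, and that period-finding step is exactly what a quantum computer would do exponentially faster.

    # Toy, purely classical illustration of the period-finding idea behind
    # Shor's algorithm (the quantum speed-up replaces the brute-force loop).
    from math import gcd

    def find_period(a, n):
        # Smallest r > 0 with a**r == 1 (mod n), found here by brute force.
        r, value = 1, a % n
        while value != 1:
            value = (value * a) % n
            r += 1
        return r

    n, a = 15, 7                      # tiny example: factor 15 using base 7
    r = find_period(a, n)             # r = 4
    p = gcd(a ** (r // 2) - 1, n)     # gcd(48, 15) = 3
    q = gcd(a ** (r // 2) + 1, n)     # gcd(50, 15) = 5
    print(f"period {r}; factors {p} and {q}")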



Figure 3. Shown is a car equipped with self-driving hardware.
Source: https://media.wired.com/photos/59b06158138e953cd9647f8e/master/w_2400,c_limit/lyftselfdriving-TA.jpg

Are you ready to give up the driver's seat for a set of preconditioned algorithms? If your answer is yes, then sadly we're not on the same page. If your answer is no, we're still not on the same page, because I don't have a car (i.e. I have a penchant for taking walks and using public transportation). Weighing the bliss of actually driving down an open highway against just sitting in the passenger's seat is quite difficult for someone who lacks the experience. But one thing I am firm about - I wouldn't want a huge ugly chunk of whatchamacallit sitting on top of my car! *Blech*

The self-driving car has caused a lot of ruckus in the media, yet it seems to have failed to gain the public's favor. (see the link below for more information)


Could it be because of all the accidents and mishaps during test runs? Or perhaps the security risks?

I personally took a M.O.O.C. on machine learning from Coursera around two years ago, so I have a decent grasp of what is essentially happening inside a self-driving car. The biggest problem that came to my mind at the time was this: if the algorithm/neural network adapts to a training set that is provided by "us", then how can we ascertain that we have fed it enough training data? This can be checked computationally, but in a practical sense, given a chaotic world brimming with stochastic processes, how sure is sure enough?
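For what it's worth, the standard computational check I had in mind is a learning curve: retrain on growing slices of the data and watch whether held-out accuracy is still improving. A minimal sketch with a toy dataset (nothing to do with an actual driving stack):

    # Learning-curve check for "do we have enough data?" on a synthetic dataset.
    # A plateauing validation score suggests more of the same data won't help;
    # a still-rising curve suggests it will.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import learning_curve

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

    sizes, _, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

    for n, score in zip(sizes, val_scores.mean(axis=1)):
        print(f"{n:5d} examples -> validation accuracy {score:.3f}")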

Again, I have only a smattering of background in machine learning, which makes my statements above quite contentious.