
AI drives rise of super sensors



I really did get inspired when I read the two original stories. But as usual in this field, I think imagination is getting well ahead of reality. The "as it gets better" argument really doesn't hold much water when you're talking about a system that provides diminishing returns.

I love the potential of image recognition. But it's just one coarse view on the world. It's foiled by shadows, darkness, occlusion, contrast, reflections and so on. AI-based image recognition learns patterns that are often so at odds with how we understand vision to be useful that it fails in spectacularly bizarre and unpredictable ways (e.g. https://www.wired.com/2015/01/simple-pictures-state-art-ai-still-cant-recognize/, amongst others).
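To make that failure mode concrete, here's a minimal sketch of the classic "fooling image" trick: one signed gradient step on the loss, in the style of FGSM. The pretrained ResNet and the shape of the input are my own illustrative assumptions, not anything from the linked article:

```python
# Sketch only: assumes PyTorch/torchvision are installed, `image` is a
# normalised (1, 3, 224, 224) tensor and `label` its integer class --
# all illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_fool(image, label, epsilon=0.03):
    """Nudge every pixel one signed-gradient step towards a higher loss."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    # The perturbation is imperceptible to a human, yet it is often enough
    # to flip the model's prediction -- the bizarre, unpredictable failure.
    return (image + epsilon * image.grad.sign()).detach()
```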

And I really got carried away thinking about the implications of the super-sensor. But again, it's a coarse view on the world.

In both cases they are amazing examples of what can be done with less. The potential for parlour tricks is endless, and there are plenty of practical applications too. But I don't buy the evolutionary conclusion: both approaches are coarser abstractions of the sensed variables. Coarser abstractions give you a wider view (you can detect more things), but they can never add information; whatever information an abstraction discards cannot be recovered. There will always be cases where a direct sensing method is the only way to achieve specificity, accuracy or noise immunity.
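A toy numerical example (mine, not from the articles) of that information loss: two different raw traces collapse to the same coarse reading, so nothing downstream can ever tell them apart again.

```python
# Two hypothetical fine-grained sensor traces.
raw_a = [20.1, 20.4, 24.9, 20.2]   # e.g. a brief temperature spike
raw_b = [21.0, 21.6, 22.4, 20.6]   # e.g. a gentle drift

def coarse(xs):
    """A "super sensor" style summary: one rounded average per window."""
    return round(sum(xs) / len(xs))

print(coarse(raw_a), coarse(raw_b))  # 21 21 -- the mapping is many-to-one,
                                     # so the spike in raw_a is unrecoverable
```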

Technically I can hear every conversation I'll ever need to hear without leaving my office - I just need to amplify the vibrations of my desk. In practice I'm better off picking up the phone.


Personally, I think that we will end up with a mix of specialist sensors and more general ones that leverage cognitive computing and machine learning. 

The example of failures above misses an important fact about machine learning: such systems are trained. Yes, they may get it wrong, but over time they get better and better, and are usually more accurate than humans in the end. Such machines will also improve over time. The latest announcement on Google Lens has great promise, I think.

I liked this example of cognitive computing where I think super sensors could be used.

"One example of the application of cognitive computing in IoT is in health care for the elderly in their own homes (curtesy IBM). Asking the elderly to wear sensors is problematic because they may not raise an alert when they should or they alert when they shouldn’t and people stop wearing them after a while. An alternative approach is to instrument other things in the house such as fridge doors, light switches, bathrooms, movement sensors, and maybe infrared sensors etc. The cognitive software can then build up an understanding of what normal looks like. When something abnormal happens, the system can then raise an alert and make a call to the emergency services."


18 hours ago, Tim Kannegieter said:

The example of failures above misses an important fact about machine learning: such systems are trained. Yes, they may get it wrong, but over time they get better and better, and are usually more accurate than humans in the end. Such machines will also improve over time.

 

I don't know where people keep getting that notion. I suspect it is fuelled by works of science fiction. Machine learning does not keep getting better with more training. It reaches a point where it starts to either specialise (i.e. it has learned the training set so well that it can no longer generalise) or the accuracy hits an asymptote (i.e. there is no better hyperplane dividing the input space), at which point it is overtrained. Getting machine learning right is about ensuring you implement the right stopping conditions. And the machine learning referenced by Jeff Clune in the article I referenced is the state of the art - they've been trained as well as the best ML practitioners can manage, and they're easily fooled. ML can also never be "more accurate than humans" - accuracy requires supervised learning, and supervised learning requires humans to provide the classifications for the ML to learn from. So humans must set the bar for accuracy which the ML attempts to meet.
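To be concrete about "the right stopping conditions": the standard recipe is early stopping against held-out data. In the sketch below, `train_one_epoch` and `eval_loss` are hypothetical stand-ins for whatever framework is in use; the patience logic is the point:

```python
# Sketch only: the two helper functions are assumed, not real library calls.
def train_with_early_stopping(model, max_epochs=100, patience=5):
    best_loss, stale = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model)          # one pass over the training set
        val_loss = eval_loss(model)     # data the model never trains on
        if val_loss < best_loss:
            best_loss, stale = val_loss, 0   # still generalising -- keep going
        else:
            stale += 1                       # validation loss stopped improving
            if stale >= patience:
                break   # more training would only specialise to the training set
    return model
```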

The whole profession will certainly get better, but in the 30 or so years that ML has been under serious development, we've gone from being able to recognise objects in images to being able to do it faster (and market it better!). There are some fundamental challenges. ML is, and always has been, an excellent tool. There's no evidence so far that it will lead to some kind of general intelligence with no bounds.

 

Tim Kannegieter said:

I liked this example of cognitive computing where I think super sensors could be used.

 

Yeah, loved that one too! But that was the opposite of a super sensor, right? They're suggesting that instead of having one wrist-worn sensor to rule them all, you litter specialised sensors about the place and build up a picture based on lots of different inputs.

