Wednesday, December 16, 2015

Robot Revolution Battlefields and "Driverless Cars"

I have often moved the conclusion of my blog entry up to the beginning.  That is logical when the blog is a discovery-building exercise that attempts to go somewhere.  Sometimes I get lost along the way.

This blog entry went somewhere, so I will bring the end to the beginning; the rest of the entry will then plod along the road it took to get there.

This video featuring George Hotz, published 16 Dec. 2015, is where this blog entry finished before the ending was moved up to the start:

Meet the 26-Year-Old Hacker Who Built a Self-Driving Car... in His Garage:

https://www.youtube.com/watch?v=KTrgRYa2wbI

George Hotz, Wikipedia

George Hotz on Bloomberg: an excellent read, with this update following it.

Then plod on down this page, along my self-driving road, to see how I got to this ending that was moved up to the start.................................

Wait, wait, there is more.  Before moving on, get this:  Scaling seamlessly from the granular level to the aggregate level, on the foundation of the granular entity as an instance of its class, is the model of my digital positive-money monetary system.  Exactly like the performance of a single automobile on a road with millions of other vehicles, there is no room for ambiguity....

Our current monetary system is extremely ambiguous!  All vehicles (monetary units) on the road abide by the same rules.  The problem domain is essentially the same.  So are the object-oriented solutions to the problem of the person/entity relationship in the Information Age.

Plodding to the end follows here........................................................where I start is not necessarily where I go..............................

"Robot Revolution – Global Robot & AI Primer" is referenced at this link:

http://ftalphaville.ft.com/2015/12/15/2147846/the-future-military-artificial-intelligence-complex/

While there are many references to this report, produced by Bank of America Merrill Lynch’s research team and looking at the general commercial scope and impact, the report itself is not public, AFAIK.

The references examine it from the perspectives of many different agencies.  One of those, of course, is the DOD, where the focus is not profit but another mission, at least for those at the pointy end of the spear.  Some get killed in warfare; others make a killing.

The link presents the DOD view of AI application.  DOD does top-level, top-down, big-picture project development and application.  The link describes the DOD big picture like this:

"So what’s the DoD looking at in terms of technology for the future of the US military? As one would expect, there’s a handy set of “five building blocks” they’ve identified regarding AI and autonomy, along with plenty of acronyms. Take a look at the US military’s take on AI and weapon tech, as laid out in Work’s speech (bolding, links and brackets our own)"

1.  Deep learning systems.

2.  Human/Machine Collaboration.

3.  Assisted Human Operations.

4.  Advanced Human-Machine Combat (command-control) Teaming.

5.  New types of network-enabled semiautonomous weapons (machine things).

Duh!

These are the five most obvious, simple points of the future of AI development, dressed up to look like a genius plan in DOD-speak PowerPoint with military application terms.  Basic stuff, but nevertheless the blueprint design for all AI development.

Machines have to learn enough to at least collaborate with humans: enough to do the yeoman work at the simple-decision level, with some degree of collaboration to avoid the dumb things a human would do.  The machine also learns what is dumb and what is not.  It is a continuum, with AI-enabled machines assisting human operations and then moving to advanced teaming.

Wow!  It looks like a plan for spending billions or trillions in the Military/AI complex.  There will be a great amount of spin-off to the commercial application of government-funded AI.

The five points are a general-purpose model of AI development.

It can be applied to "driverless" cars.  There is no such thing as a driverless car, but I addressed that in a prior blog entry.  Some "thing" will always be driving.

Applying the model:

1.  The AI of the car will have to record every action the driver takes.  It will have to relate that action to an object, real or conceptual, that sent a message to the driver to implement a method the driver knows how to perform in driving the car, as sketched below.
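A minimal sketch in Python of what such a record might look like; every name here is hypothetical, invented for illustration, not taken from any carmaker's actual telemetry format:

```python
from dataclasses import dataclass, field
import time

# Hypothetical record of one driver action: the object (real or
# conceptual) that "sent the message" and the method the driver ran.
@dataclass
class DriverAction:
    trigger: str       # e.g. "stop sign", "brake lights ahead"
    method: str        # e.g. "brake_to_stop", "steer"
    params: dict       # e.g. pedal position, steering angle
    timestamp: float = field(default_factory=time.time)

log = []
log.append(DriverAction("stop sign", "brake_to_stop", {"pedal": 0.6}))
log.append(DriverAction("curve ahead", "steer", {"angle_deg": -12.0}))
print(len(log), "actions recorded")
```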

An interesting development in the capability of machines to observe and learn, and in the magnitude of the program required for them to do that, is at this link:

http://arstechnica.com/information-technology/2015/12/facebooks-open-sourcing-of-ai-hardware-is-the-start-of-the-deep-learning-revolution/

The above link has this interesting video demonstrating what a machine can recognize.  Driving is about the driver recognizing things.

https://www.youtube.com/watch?v=leGKWgnyu70

In the broader sense, "seeing things" is situational awareness that a computer does not have to acquire visually: things like geolocation knowledge giving the locations of speed-limit zones and stop signs, plus speed, distance, proximity, etc.  Sight is a primary input for a human driver, but it is not necessarily as important to a computer that has other means of getting the same information.
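As a sketch of that non-visual awareness: given a GPS fix, the car can look up the speed limit from map data it already carries.  The zones below are made-up stand-ins for a real HD map database:

```python
import math

# Hypothetical map data: (lat, lon, radius_m, speed_limit_mph) zones.
SPEED_ZONES = [
    (39.7684, -86.1581, 500.0, 25),    # downtown zone
    (39.7910, -86.1480, 2000.0, 55),   # highway segment
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_limit_at(lat, lon, default_mph=30):
    """Return the limit of the first zone containing the GPS fix."""
    for zlat, zlon, radius_m, limit in SPEED_ZONES:
        if haversine_m(lat, lon, zlat, zlon) <= radius_m:
            return limit
    return default_mph

print(speed_limit_at(39.7684, -86.1581))  # 25: inside the downtown zone
```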

The progression in AI driver assistance is a lot of deep learning, quantitative and qualitative: all possible information about the actions of the driver and the things those actions relate to, collected and transmitted for data mining to feed AI learning.  That means learning the general actions of all drivers in the aggregate, plus granular-level knowledge of the specific driver.  Strictly learning only, perhaps apart from doing something simple and smart like applying the brakes before hitting a solid object immediately in front of or behind the vehicle.  Otherwise: capture, record, and transmit all possible information.  It is the Total Information model of surveillance, applied to all drivers and to each specific driver.
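That one simple, smart action, braking before hitting a solid object, reduces to a kinematics check.  A minimal sketch with illustrative parameter values, not production tuning:

```python
def should_emergency_brake(gap_m, closing_speed_mps,
                           reaction_s=0.1, max_decel_mps2=8.0):
    """Brake when stopping distance (reaction distance plus braking
    distance) at the current closing speed meets or exceeds the gap."""
    if closing_speed_mps <= 0:
        return False  # not closing on the object
    stopping_m = (closing_speed_mps * reaction_s
                  + closing_speed_mps ** 2 / (2 * max_decel_mps2))
    return stopping_m >= gap_m

print(should_emergency_brake(gap_m=12.0, closing_speed_mps=15.0))  # True
print(should_emergency_brake(gap_m=60.0, closing_speed_mps=15.0))  # False
```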

The next steps are to integrate the human and the car in the driving operation.  This is a developing transfer of responsible control to the AI-capable machine, with some degree of human accountability transferred to the machine.

The end point is where the machine has full control responsibility, as well as the accountability that was at one time the driver's, to the extent of the law and the justified actions of a "reasonable person".  That makes it a reasoning machine, and the accountability for its reasoning is assumed by the designers of the AI brain.

At that point a person is truly driven.

I hope that neither cars nor battlefield weapons get to that point.  Since "Battlefield Total Control" is a closed system to be dominated, it is likely that a battlefield under free-fire rules will get to the point of autonomously acting machines before cars do.

The "Devil Made Me Do It"  The ultimate assignment of accountability.

What is the current state of the art in AI application to the human/machine interface, laid out in such extremely general terms by the DOD view, which is one sectoral example of the wider scope?

Here is a benchmark of progress emerging from the garage:

https://www.youtube.com/watch?v=KTrgRYa2wbI

It appeared as a link here:

http://www.theverge.com/2015/12/17/10374422/elon-musk-george-hotz-self-driving-car-tech-criticism

From the latter link:

"The post goes on to say that while it is "relatively easy" to create a machine learning system that is 99 percent correct, it is "vastly more difficult" to reach 99.9999 percent accuracy."

How to get from the 99 percent that Hotz might achieve on a limited stretch of road to 99.9999 percent capability?  Hotz is programming one car with one car's AI feedback.  That is granular.  What if Google used Hotz's car AI operating system and applications on every car on the road, only in passive mode, recording mass data input through a large variety and number of sensors, exactly as Google did with a relatively few cars driving around the country with cameras and GPS?  That is big data for data mining.

When all that big data is created for every road in the country, there will be a tipping point between human and machine intelligence as the primary performer of the driving task.  It all comes from granular-level input from every car on the road that is equipped for the task and reporting in real time to a data aggregator and miner.
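A toy sketch of that granular-to-aggregate flow; the class and report format are hypothetical, purely to illustrate how a fleet-level view emerges from single-car reports:

```python
from collections import defaultdict

# Hypothetical aggregator: each car reports granular observations per
# road segment in real time; the fleet-level view emerges on top.
class FleetAggregator:
    def __init__(self):
        self.observations = defaultdict(list)

    def report(self, car_id, road_segment, observation):
        """One car's granular, real-time report for one road segment."""
        self.observations[road_segment].append((car_id, observation))

    def coverage(self):
        """Aggregate view: distinct cars that have seen each segment."""
        return {seg: len({car for car, _ in obs})
                for seg, obs in self.observations.items()}

agg = FleetAggregator()
agg.report("car-001", "I-465 mile 12", {"speed_mph": 61, "lane": 2})
agg.report("car-002", "I-465 mile 12", {"speed_mph": 58, "lane": 1})
print(agg.coverage())  # {'I-465 mile 12': 2}
```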

Hotz, at the granular level, has as much opportunity to create the single-entity operating system and application program as does Elon Musk.  Musk has a bigger garage and more money.  Human intelligence plays on a level field, though the field may be tilted by big money and established entities, even ones considered the leading, bleeding edge.

My sentiment goes to Hotz.  In the video, catch the scene where he is talking to the camera with a water heater in the background.  Literally in the garage.  Cool.

From the Verge link:

"Bloomberg's story notes that Tesla's self-driving system uses components built by Israeli company Mobileye, which also supplies rival car manufacturers, and it's this technology that Hotz thinks his system will "crush." Tesla's blog post states: "If other car companies could meet or exceed the Tesla product by buying an off-the-shelf solution, they would do so." It's Hotz's claim that this is nearer to happening than Tesla — and Musk — might like to think."

I see the state of the art of AI moving so fast that it follows the same model as electronic publishing.  Anybody can use the new tools for applications that beat not only the legacy system but the costly tools it uses.  Beat the big boys by simply taking advanced things off the shelf and applying them in new and revolutionary ways.

That is how things emerge from the garage.  Once out of the garage, all they have to do is scale rapidly.  In today's environment, that rapid scaling is called going viral.  When the basic one-car package goes on all cars, it is viral.  Cars already have a fundamental computer on board, exposed through the OBD port.  The cellular com system is already established.
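Reading that on-board computer is already routine.  A minimal sketch using the open-source python-OBD package and an assumed ELM327-style OBD-II adapter:

```python
import obd  # open-source python-OBD package (pip install obd)

# Connect through an OBD-II adapter (e.g. an ELM327-style dongle);
# obd.OBD() auto-detects the serial port it is plugged into.
connection = obd.OBD()

for cmd in (obd.commands.SPEED, obd.commands.RPM,
            obd.commands.THROTTLE_POS):
    response = connection.query(cmd)
    if not response.is_null():
        print(cmd.name, response.value)  # e.g. SPEED 72 kph
```

Pair a query loop like that with the established cellular com system and the capture-record-transmit model described above is complete.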

Hold on to your steering wheel.  It is going to be a wild ride into the future.

Maybe the day will come when the Indianapolis 500 pits a human driver against a computer driver.  It happened with John Henry, and again with the game of chess.  Same old story line as Terminator.

The Industrial Age was man using machine, in control of it.

The Information Age is at the stage of partnership.  When the machine dominates on the track or on the field, there will be a new game and a new kind of Olympic winner: a partnership of man and machine.  The DOD crystal ball sees that partnership going even farther and faster to win.

George Hotz works in his garage.  Currently I am creating my ideas in a workshop, sleeping in a trailer next to it.  The best ideas come from places like that, not cubicles.  Both are geometrically the same, but the intelligence exercised in them makes them different containers.

A report on California draft rules for self driving cars is at this link:

http://www.wired.com/2015/12/californias-new-self-driving-car-rules-are-great-for-texas/

It contains a link to the official draft:

http://dmv.ca.gov/portal/dmv/detail/pubs/newsrel/newsrel15/2015_63

The link mentions the California Partners for Advanced Transportation Technology (PATH).

Trevor Darrell is its new director, appointed in 2015.

An exact quote from the link:  "Trevor's area of expertise is in algorithms for large-scale perceptual learnin."  Tongue-in-cheek academic humor?

"Trevor's group has recently introduced new software framework for open-source deep learning called CAFFE."   Hurrah for Open Source!  Trevor does have past connections to Big Auto.  Open source however demonstrates anti-proprietary advantage of profit motive as well as influence.

CAFFE: the following is extracted from the website:

"Why Caffe?

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag to train on a GPU machine then deploy to commodity clusters or mobile devices.
Extensible code fosters active development. In Caffe’s first year, it has been forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors the framework tracks the state-of-the-art in both code and models.
Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That’s 1 ms/image for inference and 4 ms/image for learning. We believe that Caffe is the fastest convnet implementation available.
Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join our community of brewers on the caffe-users group and Github.
* With the ILSVRC2012-winning SuperVision model and caching IO. Consult performance details."

It is obviously a vision-oriented deep learning AI with a stated GitHub community.
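As a taste of what driving Caffe looks like from Python; the file names below are placeholders for a trained model such as Caffe's reference CaffeNet, not files I have verified:

```python
import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu() on a CUDA machine

# Placeholder file names; Caffe's reference models follow this layout.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Standard Caffe preprocessing: HxWxC float image -> CxHxW BGR blob.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_channel_swap('data', (2, 1, 0))
transformer.set_raw_scale('data', 255)

image = caffe.io.load_image('road_scene.jpg')  # placeholder image
net.blobs['data'].data[...] = transformer.preprocess('data', image)
probs = net.forward()['prob'][0]
print('predicted class index:', probs.argmax())
```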

Much can be learned from processing 60M images per day!  Unsaid is how many raw images can be captured as input to processing, which is currently limited by the capacity of the GPU.  Images are one thing, I would guess; video is another.  That is how we learn: real-time, real "video".

There is an extreme depth of relevant sector-specific links at the CAFFE site for me to wander through and wonder about!

One of the core sectors "looks like" (per my Mark I Mod I eyeball visual recognition input system) BVLC, the Berkeley Vision and Learning Center:

"A new center for focused collaborations with industry
on autonomous perception research." 


What fun they must be having.  I wish I were smart enough to join in the fun, but I am smart enough to watch them play in and on their field!






 













