

A Site Devoted to the Discovery and Application of Emerging Technologies.

Friday, June 26, 2020

Code as Evidence in Contracts, Disputes

A UK piece on computing and dispute resolution, and on code as evidence, which came up with respect to smart contracts. Click through for useful and detailed links.

The role of usability, power dynamics, and incentives in dispute resolutions around computer evidence  in Bentham’s Gaze by Alexander Hicks  

As evidence produced by a computer is often used in court cases, there are necessarily presumptions about the correct operation of the computer that produces it. At present, based on a 1997 paper by the Law Commission, it is assumed that a computer operated correctly unless there is explicit evidence to the contrary.

The recent Post Office trial (previously mentioned on Bentham’s Gaze) has made clear, if previous cases had not, that this assumption is flawed. After all, computers and the software they run are never perfect.

This blog post discusses a recent invited paper published in the Digital Evidence and Electronic Signature Law Review titled The Law Commission presumption concerning the dependability of computer evidence. The authors of the paper, collectively referred to as LLTT, are Peter Bernard Ladkin, Bev Littlewood, Harold Thimbleby and Martyn Thomas.

LLTT examine the basis for the presumption that a computer operated correctly unless there is explicit evidence to the contrary. They explain why the Law Commission’s belief in Colin Tapper’s statement in 1991 that “most computer error is either immediately detectable or results from error in the data entered into the machine” is flawed. Not only can computers be assumed to have bugs (including undiscovered bugs) but the occurrence of a bug may not be noticeable.  ... "

Talk: Robots as a Service in a Post Pandemic World

I had mentioned this talk, here is the recording:

Correspondent Jim Spohrer talks about robot tech in our pandemic futures.    ...

ISSIP Speaker Series: COVID-19 & Future of Work and Learning
Speaker: Jim Spohrer, Director, Cognitive Open Tech, IBM

Title: How will COVID-19 affect the need for and use of robots in a service world with less physical contact?

As AI and robotics come to the service world, including retail, hospitality, education, healthcare, and government, some jobs will go away, some new jobs will be created, and the income required for a family to thrive might be lessened.   In this creative session participants will be asked to engage in discussing three scenarios below – and the wicked problem of the bespoke impact on livelihood and jobs, which is creating uncertainty and concerns.   The groups will then report back on which scenarios they find more desirable.  .... 

Talk/slides here: https://youtu.be/RchxIKum_tI

-----------------
Jim Spohrer, PhD
Director, Cognitive Opentech Group (COG)
IBM Research - Almaden, 650 Harry Road San Jose, CA 95120
Innovation Champion: http://service-science.info/archives/2233

Why Your AI Project May Fail

This is true: if you don't have ready understanding of and access to the local architecture, it's much harder to get the data to train models in context. And it's certainly also very hard to implement them in any sort of deployed model. This holds whenever you hope to get anything used by a client, not necessarily just in an AI project. To be clear, though, you usually understand this fairly early on, and people who are already there will usually tell you.

The Dumb Reason Your AI Project Will Fail
by Terence Tse , Mark Esposito , Takaaki Mizuno and Danny Goh  in the HBR

Here is a common story of how companies trying to adopt AI fail. They work closely with a promising technology vendor. They invest the time, money, and effort necessary to achieve resounding success with their proof of concept and demonstrate how the use of artificial intelligence will improve their business. Then everything comes to a screeching halt — the company finds themselves stuck, at a dead end, with their outstanding proof of concept mothballed and their teams frustrated.

What explains the disappointing end? Well, it’s hard — in fact, very hard — to integrate AI models into a company’s overall technology architecture. Doing so requires properly embedding the new technology into the larger IT systems and infrastructure — a top-notch AI won’t do you any good if you can’t connect it to your existing systems. But while companies pour time and resources into thinking about the AI models themselves, they often do so while failing to consider how to make it actually work with the systems they have.

The missing component here is AI Operations — or “AIOps” for short. It is a practice involving building, integrating, testing, releasing, deploying, and managing the system to turn the results from AI models into desired insights of the end-users. At its most basic, AIOps boils down to having not just the right hardware and software but also the right team: developers and engineers with the skills and knowledge to integrate AI into existing company processes and systems. Evolved from a software engineering and practice that aims to integrate software development and software operations, it is the key to converting the work of AI engines into real business offerings and achieving AI at a large, reliable scale.  ... "

Dash Shopping Wand to be Bricked

As predicted, the Dash Wand has gone away. I don't think I ever bought anything with it, just put things on a list to look at or buy later. As noted below, they will be bricked ... and I won't be able to do anything with them at all. Why not provide them as a means of getting a UPC code? I assume they already do that all the time anyway. A simple service for good customers? No. They will still act like a nice strong refrigerator magnet, and they can't brick that away.

Amazon Dash Wand No More – Alexa Shopping Device Discontinued  By Eric Hal Schwartz in Voicebot

Amazon is shutting down the Amazon Dash Wand on July 21, three years after the product scanner and Alexa voice assistant-powered device first debuted. The company isn’t just ending support for the product, it is remotely bricking them so they can’t be used at all and asking owners to send their Wands to Amazon’s recycling program, according to an email sent to Wand owners and shared with Voicebot. The decision continues Amazon’s consolidation around the Echo brand for Alexa with more versatile smart speakers and smart displays instead of the more narrowly-focused Dash Wand ... .'

Thursday, June 25, 2020

Virtualitics June Newsletter

I see that Virtualitics has a June Newsletter out; we helped test early beta versions. I see you can now schedule a 30-day free immersive trial. See my past experiences and comments on it with the tag 'virtualitics' below.

JUNE NEWSLETTER  @virtualitics.com

Software updates, performance improvements, and a number of new additions to our team.

SOFTWARE UPDATE
VIRTUALITICS IMMERSIVE PLATFORM VERSION 2020.3

→ New Features:

Improved plot customizations and enhanced visualizations include a refined tick mark system, plot titles, and management of trailing zeros

A new avatar system as well as optimizations in virtual reality increase speed and improve overall functionality across the environment

User interface improvements for increased software stability and an improved collaborative experience

If you have any questions or need assistance upgrading please contact us at support@virtualitics.com
.... "

Amazon and Counterfeiting Crimes

An area we also worked in: detecting counterfeits. But then, isn't this another cooperation with law enforcement?

Amazon forms Counterfeit Crimes Unit to tackle its fake goods problem
The unit will work with brands and law enforcement around the world.
By Mariella Moon, @mariella_moon in Engadget

Amazon has been grappling with a counterfeit problem for years to the point that it reportedly decided to be more cooperative with law enforcement a few months ago. Now, the e-commerce giant has formed a new division called the Counterfeit Crimes Unit that’s dedicated to taking down fraudsters selling fakes on its website. The unit is composed of former federal prosecutors, experienced investigators and data analysts, working together to find offenders and hold them accountable wherever they are in the world..... " 

Pollinating by Drone and Bubbles

Fascinating detail. As a long-time part-time botanist, of interest. But how effectively can it be done? Will the use of drones interfere with natural methods? That depends on the plants involved and their current pollination mechanisms. A form of biomimicry. Also a likely path for developing smaller drones.

Drone With Bubble Machine Can Pollinate Flowers Like a Bee
Pollen-carrying soap bubbles could provide a simple and effective method of artificial pollination  By Evan Ackerman in IEEE Spectrum

Researchers in Japan developed a drone equipped with a bubble maker for autonomous pollination.

The tiny biological machines that farms rely on to pollinate the flowers of fruiting plants have been having a tough time of it lately. While folks around the world are working on different artificial pollination systems, there’s really no replacing the productivity, efficiency, and genius of bees, and protecting them is incredibly important. That said, there’s no reason to also work on alternate methods of pollination, and researchers at the Japan Advanced Institute of Science and Technology (JAIST) have come up with something brilliant: pollen-infused soap bubbles blown out of a bubble maker mounted to a drone. And it apparently works really well.  ... " 

New Ways to Build Assistant Voice Apps

Looking forward to the details on this; any way you can create better intelligence is good. Note the changes in training methods.

Google Assistant Upgrades Action Developer Tools to Streamline Building and Running Voice Apps
By Eric Hal Schwartz in Voicebot.ai

Google announced upgrades to the Google Assistant runtime engine on Wednesday designed to improve the speed and performance of voice apps. The tech giant revealed the improvements along with a handful of new and updated tools aimed at simplifying the process of building Google Assistant Actions.

The new Actions Builder feature sets up a central hub in the Actions console for developing and testing a new Google Assistant Action, showing visually how the AI responds to different conversational prompts and making it easier to train and debug the app. The idea is that the developer won’t have to keep going back from the console to the Dialogflow natural language understanding platform. All of the tools are in the console, making the whole process more efficient.

Google also updated the Actions SDK to assist with boosting that efficiency. The SDK puts every element of the voice app into files that a developer can export wherever they wish. That means the developer could build the voice app without needing to use the cloud, while still enabling them to move training data around. Using the files with Google’s improved command-line interface (CLI) also allows the developer to skip using any interface at all and just write and edit the app with code.
... '

Getting Pay for Data

Another project aimed at paying users for their data. Links to our long-term data-as-an-asset view.

Andrew Yang Is Pushing Big Tech to Pay Users for Data
By The Verge
June 22, 2020

Andrew Yang wants people to get paid for the data they create on big tech platforms like Facebook and Google, and with a new project launching on Monday, he believes he can make it happen. ...

Yang's Data Dividend Project is a new program tasked with establishing data-as-property rights under privacy laws like the California Consumer Privacy Act (CCPA) all across the country. The program hopes to mobilize over 1 million people by the end of the year, focusing primarily on Californians, and "pave the way for a future in which all Americans can claim their data as a property right and receive payment" if they choose to share their data with platforms.

At the beginning of the year, the CCPA went into effect, granting consumers new control over their data online like the right to delete and opt out of the sale of their personal information. There's nothing in the law about tech companies paying for data (or, more specifically, paying them not to opt out), but Yang's new project is looking to show that the idea is popular with voters. The Data Dividend Project is betting on collective action as a means of changing the law and extending data property rights to users across the country. If this idea becomes law, Yang's team says it will work on behalf of users to help them get paid.

"We are completely outgunned by tech companies," Yang told The Verge. "We're just presented with these terms and conditions. No one ever reads them. You just click on them and hope for the best. And unfortunately, the best has not happened."  ... ' 


Wednesday, June 24, 2020

SAP and IBM Announce New Intelligence Offerings

Next steps in the SAP and IBM partnership: digital transformation towards the intelligent enterprise.

In Cision: PRNewswire  https://www.prnewswire.com/

IBM and SAP Announce New Offerings to Help Companies' Journey to the Intelligent Enterprise
ARMONK, N.Y. and WALLDORF, Germany, June 23, 2020 /PRNewswire/ -- IBM (NYSE: IBM) and SAP SE (NYSE: SAP) today announced their partnership's next evolution, with plans to develop several new offerings designed to create a more predictable journey for businesses to become data-driven intelligent enterprises. Over 400 businesses have modernized their enterprise systems and business processes through IBM and SAP's digital transformation partnership. 

Hitchhiking Drones

I had mentioned this novel idea as well in a recent post. Also covered in considerable detail in IEEE Spectrum. As noted, it will require some considerable design changes for public transportation.

Delivery Drones Could Hitchhike on Public Transit to Massively Expand Their Range.  Riding on the top of public buses could make drone delivery much more efficient .... "

By Evan Ackerman in IEEE Spectrum.  ... 

Spot Robotic Dog Now Available

It's here; a good piece on the rollout in IEEE Spectrum by Evan Ackerman. I have been noting some application plans here over several years. Expensive, but if it effectively replaces a person or more, not really. What then is our expectation of privacy from a camera-wielding robot that could be quite intimidating? And ... can you take the publicity of replacing a person or more?

Boston Dynamics' Spot Robot Dog Now Available for $74,500
For the price of a luxury car, you can now get a very smart, very capable, very yellow robotic dog
By Evan Ackerman

Boston Dynamics has been fielding questions about when its robots are going to go on sale and how much they’ll cost for at least a dozen years now. I can say this with confidence, because that’s how long I’ve been a robotics journalist, and I’ve been pestering them about it the entire time. But it’s only relatively recently that the company started to make a concerted push away from developing robots exclusively for the likes of DARPA into platforms with more commercial potential, starting with a compact legged robot called Spot, first introduced in 2016.  ... " 

A Domain-Specific Supercomputer for Training Deep Neural Networks

Good explanation of the phases of using computing power for these kinds of problems.

A Domain-Specific Supercomputer for Training Deep Neural Networks
By Norman P. Jouppi, Doe Hyun Yoon, George Kurian, Sheng Li, Nishant Patil, James Laudon, Cliff Young, David Patterson
Communications of the ACM, July 2020, Vol. 63 No. 7, Pages 67-78
10.1145/3360307

The recent success of deep neural networks (DNNs) has inspired a resurgence in domain specific architectures (DSAs) to run them, partially as a result of the deceleration of microprocessor performance improvement due to the slowing of Moore's Law.17 DNNs have two phases: training, which constructs accurate models, and inference, which serves those models. Google's Tensor Processing Unit (TPU) offered 50x improvement in performance per watt over conventional architectures for inference.19,20 We naturally asked whether a successor could do the same for training. This article explores how Google built the first production DSA for the much harder training problem, first deployed in 2017.  ... " 

Price of Personal Data

Looking for the full report mentioned here; will post when I get a reference. Back to our long-examined question of what the price of private data should be, and how people should be made to understand the implications.

Brits will sell their personal data for pennies  

Surprising findings from an Okta report on digital identity suggest Brits would be willing to part with valuable personal data for a surprisingly low amount  .... 
By  Alex Scroxton, Security Editor  in ComputerWeekly  ... 

Is Slow Neuroinformatics Private?

I have worked with companies that used machine learning image analysis to determine, for example, whether a person was 'of age' in various locations, to match local regulatory provisions. It could be done quite accurately, about 95%, but not perfectly. Can that be done privately?

The Benefits of Slowness
Ruhr-Universität Bochum
Meike Drießen
June 15, 2020

Neuroinformatics engineers at Ruhr-Universität Bochum's Institute for Neural Computation in Germany have developed an algorithm that estimates an individual’s age and ethnic origin with greater than human-level accuracy. The team fed the algorithm several thousand photos of faces of different ages, sorted by age. The system disregarded features that varied between images, and only considered features that slowly changed over time. In calculating the age of the people in the photos, the algorithm outperformed even human assessment. The algorithm also estimated the correct ethnic origin of the subjects in the photos with greater than 99% probability, even though the images' average brightness was standardized, making skin color an insignificant marker for recognition. ... " 
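The "slowness" principle described above — keep only the features that change slowly across an ordered sequence of images — is the core of Slow Feature Analysis. A minimal numerical sketch of that idea (the data here is synthetic and purely illustrative, not the face features the researchers used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ordered "sequence" (a stand-in for age-sorted image features):
# one slowly varying signal mixed with fast-varying noise.
t = np.linspace(0, 2 * np.pi, 500)
slow = np.sin(t)                      # changes slowly along the ordering
fast = rng.normal(size=t.size)        # changes quickly (uninformative)
X = np.column_stack([slow + 0.1 * fast, fast])

# 1) Center and whiten so every input direction has unit variance.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = Xc @ W

# 2) Find the direction whose step-to-step change has least variance:
# that is the "slow feature" the excerpt describes.
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
devals, devecs = np.linalg.eigh(dcov)
slow_feature = Z @ devecs[:, 0]       # eigh sorts ascending: index 0 = slowest

# The extracted feature tracks the slow signal, not the noise.
corr = abs(np.corrcoef(slow_feature, slow)[0, 1])
print(round(corr, 2))
```

Run on this toy data, the recovered feature correlates almost perfectly with the slowly varying signal, even though the raw inputs are dominated by noise — the same filtering of fast, image-to-image variation that the researchers exploit.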

Tuesday, June 23, 2020

IBM Research Director on Science and HPCC

Continued work with high performance computing by IBM.

IBM's research director on how tech can push science beyond the pandemic
Dario Gil, who's been nominated to the National Science Board, wants to create a "science readiness reserve" to use tech's power to solve future crises.
By Mike Murphy May 22, 2020

Update (June 23): Dario Gil has now been officially appointed to the National Science Board.

The coronavirus pandemic has ushered in new alliances between the tech industry's biggest players and government agencies as the world races to limit the spread of COVID-19 and find a cure. Dario Gil, IBM's research director, has been in the thick of everything.

Gil has been serving on the President's Council of Advisors on Science and Technology since the group was revived in 2019, and he helped launch the High Performance Computing Consortium. The group brought together supercomputing resources from some of the most powerful machines in the world to tackle 51 projects — and counting — aimed at modeling the virus and potential drugs. The experience has led Gil to ponder the broader question of how tech can unite in quieter times, helping the world to prepare for the next disaster more rigorously. "It was wonderful that we could create the [HPCC], but we had to sort of invent it on the fly," Gil said. "Why couldn't we think ahead?" .... "

Drones Changing Shape

An example of how defensive drone capabilities are also in play.

Research Leads to Army Drones Changing Shape Mid-Flight
U.S. Army
June 16, 2020

Researchers at the U.S. Army's Combat Capabilities Development Command's Army Research Laboratory and Texas A&M University helped create a tool that will enable autonomous aerial drones to change shape during flight. The tool can optimize the structural configuration of Future Vertical Lift vehicles while accounting for wing deformation due to fluid-structure interaction. Fluid-structure interaction analyses generally have high computational costs because they typically require coupling between a fluid and a structural solver. The researchers were able to reduce the computational cost for a single run by as much as 80% by developing a process that decouples the fluid and structural solvers, which offers further computational cost savings by allowing for additional structural configurations to be performed without reanalyzing the fluid.  ... "

Segway's Done

We had a Segway very early on in our future shopping store. It turned lots of heads in those days. We taught incoming visitors to drive them. Were they just hype?

In FastCompany:  By Mark Wilson

Exclusive: Segway, the most hyped invention since the Macintosh, ends production  The Segway brand will no longer make its two-wheeled, self-balancing namesake. ...

Now, less than 20 years after the first Segway’s release, Fast Company has learned that the Segway brand will retire the last Segway as we know it, the Segway PT. Manufacturing at the Bedford, New Hampshire, plant will stop July 15. A total of 21 employees will be laid off as a result, while 12 will stay on temporarily to handle various matters, including warranties and repairs on the Segways that have already been sold. Five employees working on Segway Discovery scooters will remain.  ... " 

" Steve Jobs said it would be bigger than the PC  ... "  And you know he can't be wrong.

Uber Crosses the Road

A good example of the complexity of data mining / machine learning data.  Via O'Reilly.

Inside Uber ATG’s Data Mining Operation: Identifying Real Road Scenarios at Scale for Machine Learning    By Steffon Davis, Shouheng Yi, Andy Li, and Mallika Chawda

How did the pedestrian cross the road?

Contrary to popular belief, sometimes the answer isn’t as simple as “to get to the other side.” To bring safe, reliable self-driving vehicles (SDVs) to the streets at Uber Advanced Technologies Group (ATG), our machine learning teams must fully master this scenario by predicting a number of possible real world outcomes related to a pedestrian’s decision to cross the road. To understand how this scenario might play out, we need to measure a multitude of possible scenario variations from real pedestrian behavior. These measurements power performance improvement flywheels for:

Perception and Prediction: machine-learned models with comprehensive, diverse, and continuously curated training examples (improved precision/recall, decreased training time, decreased compute).
Motion Planning: capability development with scenario-based requirements (higher test pass-rate, lower intervention rate).

Labeling: targeted labeling jobs with comprehensive, diverse, and continually updated scenarios (improved label quality, accelerated label production speed, lowered production cost).
Virtual Simulation: tests aligned with real-world scenarios (higher test quality, more efficient test runs, lowered compute cost).

Safety and Systems Engineering: statistically significant specifications and capability requirements aligned with the real-world (improved development quality, accelerated development speed, lowered development cost).

With the goal of measuring a scenario in the real world, let’s head to the streets to study how pedestrians cross them. ... '

Sketch to Realistic Image

Seen several attempts at this, but none that impressive. We used one to quickly map out possible process decisions. Implications of 'fake' are always there, but if the aim is to construct and refine a sketch for clear illustration, it does not have to be that.

Chinese Researchers Unveil AI That Can Turn Simple Sketches Into Fake Photorealistic Pictures
Daily Mail (U.K.)
James Pero

Researchers at the Chinese Academy of Sciences have created an artificial intelligence (AI) that can convert simple sketches of a face into photorealistic images, extrapolating from rough and even incomplete sketches. The DeepFaceDrawing AI analyzes a drawing's details, then checks each individual feature separately against a database of facial features to construct its own image. Said the researchers, "Our key idea is to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch. Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches." The researchers said the technology aims to help users with little drawing skill produce high-quality images. ... " 

Monday, June 22, 2020

Baidu Doing Drone Forestry Inspections with AI

Recall I worked with forestry management applications, so the need here rings true. Note that the system mentioned, PaddlePaddle, is open source.

Baidu’s deep-learning platform fuels the rise of industrial AI

PaddlePaddle lets developers build applications that can help solve problems in a wide range of industries, from waste management to health care.

by Baidu  ( In TechnologyReview.  This content was produced by Baidu.
It was not written by MIT Technology Review's editorial staff. ) 

AI is driving industrial transformation across a variety of sectors, and we’re just beginning to scratch the surface of AI capabilities. Some industrial innovations are barely noticed, such as forest inspection for fire hazards and prevention, but the benefits of AI when coupled with deep learning have a wide-ranging impact. In Southeast Asia, AI-powered forest drones have helped 155 forestry bureaus expand the range of forest inspections from 40% to 100% and perform up to 200% more efficiently than manual inspections. 

Behind these smart drones are well-trained deep-learning models based on Baidu’s PaddlePaddle, the first open-source deep-learning platform in China. Like mainstream AI frameworks such as Google’s TensorFlow and Facebook’s PyTorch, PaddlePaddle, which was open sourced in 2016, provides software developers of all skill levels with the tools, services, and resources they need to rapidly adopt and implement deep learning at scale. ... " 

Simple Economics of the Blockchain

A considerable and interesting piece, well worth reading if you are considering blockchain use, with a proposition of the blockchain's value. Not very technical.

Some Simple Economics of the Blockchain
By Christian Catalini, Joshua S. Gans
Communications of the ACM, July 2020, Vol. 63 No. 7, Pages 80-90  10.1145/3359552

In October 2008, a few weeks after the Emergency Economic Stabilization Act rescued the U.S. financial system from collapse, Satoshi Nakamoto34 introduced a cryptography mailing list to Bitcoin, a peer-to-peer electronic cash system "based on crypto graphic proof instead of trust, allowing any two willing parties to transact directly with each other without the need for a trusted third party." With Bitcoin, for the first time, value could be reliably transferred between two distant, untrusting parties without the need of an intermediary. Through a clever combination of cryptography and game theory, the Bitcoin 'blockchain'—a distributed, public transaction ledger—could be used by any participant in the network to cheaply verify and settle transactions in the cryptocurrency. Thanks to rules designed to incentivize the propagation of new legitimate transactions, to reconcile conflicting information, and to ultimately agree at regular intervals about the true state of a shared ledger (a blockchain)a in an environment where not all participating agents can be trusted, Bitcoin was also the first platform, at scale, to rely on decentralized, Internet-level 'consensus' for its operations. Without involving a central clearinghouse or market maker, the platform was able to settle the transfer of property rights in the underlying digital token (bitcoin) by simply combining a shared ledger with an incentive system designed to securely maintain it.

From an economics perspective, this new market design solution provides some of the advantages of a centralized digital platform (for example, the ability of participants to rely on a shared network and benefit from network effects) without some of the consequences the presence of an intermediary may introduce such as increased market power, ability to renege on commitments to ecosystem participants, control over participants' data, and presence of a single point of failure. As a result, relative to existing financial networks, a cryptocurrency such as Bitcoin may be able to offer lower barriers to entry for new service providers and application developers, and an alternative monetary policy for individuals that do not live in countries with trustworthy institutions. Key commitments encoded in the Bitcoin protocol are its fixed supply, predetermined release schedule, and the fact that rules can only be changed with support from a majority of participants. While the resulting ecosystem may not offer an improvement for individuals living in countries with reliable and independent central banks, it may represent an option in countries that are unable to maintain their monetary policy commitments. Of course, the open and "permissionless" nature of the Bitcoin network, and the inability to adjust its supply also introduce new challenges, as the network can be used for illegal activity, and the value of the cryptocurrency can fluctuate wildly with changes in expectations about its future success, limiting its use as an effective medium of exchange.

In the article, we rely on economic theory to explain how two key costs affected by blockchain technology—the cost of verification of state, and the cost of networking—change the types of transactions that can be supported in the economy. These costs have implications for the design and efficiency of digital platforms, and open opportunities for new approaches to data ownership, privacy, and licensing; monetization of digital content; auctions and reputation systems.   ..." 
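The "cheap verification" point the authors make can be made concrete with a toy hash-chained ledger: any participant can re-check the entire history from the hashes alone, with no trusted intermediary, and any tampering with past entries breaks every later link. This is only an illustrative sketch of the hash-chain idea, not Bitcoin's actual block format or consensus rules:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Deterministically hash a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block commits to the previous block's hash.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "txs": transactions}
    block["hash"] = block_hash({"prev": prev, "txs": transactions})
    chain.append(block)

def verify(chain: list) -> bool:
    # Cheap verification: recompute each hash and check the back-links.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block_hash({"prev": block["prev"], "txs": block["txs"]}) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, ["alice pays bob 5"])
append_block(chain, ["bob pays carol 2"])
print(verify(chain))                       # True: untampered ledger verifies

chain[0]["txs"] = ["alice pays bob 500"]   # rewrite history
print(verify(chain))                       # False: the edit breaks the chain
```

What Bitcoin adds on top of this structure — and what the article's cost analysis turns on — is the incentive system that makes many untrusting parties maintain one such chain in agreement.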

Microsoft Relaunches Cortana without Alexa

I was initially impressed by the cooperation between Microsoft and Amazon, but admit I saw little from it. Why not have a joint effort to provide assistant intelligence? All I saw was that you could ask Cortana things like "Ask Alexa ... ", but with little new regard for the context of the question. So you could then ask 'universal' things like the weather, Wikipedia entries, states of connected systems ... but nothing that showed intelligence beyond the context of the device involved. Not very impressive beyond language processing. And now MS has disconnected entirely, but promises continued collaboration. Below is a bit dated, but gives some insight into the state of assistants.

Microsoft launches Cortana app for Windows 10 without Amazon’s Alexa
By Khari Johnson in Venturebeat

Microsoft AI assistant Cortana is getting a dedicated app today for Windows 10 PCs. Unlike Cortana in the Start menu or pinned to the taskbar, the AI assistant can now function in a dedicated space users can resize, move, and interact with like any other PC program. Cortana responding to text commands in a dedicated app can be used to do things like start meetings, create reminders, ask for info from some native Microsoft apps, automatically suggest responses, and respond to questions like “Do I have an email from my boss?”

At launch, Cortana lead Andrew Shuman told VentureBeat the dedicated Cortana app will not respond to Alexa queries. In what may be the largest such partnership in the spirit of a multi-assistant world, Amazon and Microsoft partnered up in August 2018 to make Cortana available via Amazon Echo speakers and Alexa available through Windows 10. Few public steps have taken place since then to advance or deepen the partnership.  .... '


The Nature of Visual Illusions

I have always been interested in visual illusions, and here is a study of probably the most famous one.  Illusions like this give us insight into how our brains and visual apparatus work together in practice.  They also show that direct biomimicry may not always be best.    Image examples at the link.

Study sheds light on a classic visual illusion
Neuroscientists delve into how background brightness influences our perception of an object.

By Anne Trafton | MIT News Office

It’s a classic visual illusion: Two gray dots appear on a background that consists of a gradient from light gray to black. Although the two dots are identical, they appear very different based on where they are placed against the background.

Scientists who study the brain have been trying to figure out the mechanism behind this illusion, known as simultaneous brightness contrast, for more than 100 years. An MIT-led study now suggests that this phenomenon relies on brightness estimation that takes place before visual information reaches the brain’s visual cortex, possibly within the retina.

“All of our experiments point to the conclusion that this is a low-level phenomenon,” says Pawan Sinha, a professor of vision and computational neuroscience in MIT’s Department of Brain and Cognitive Sciences. “The results help answer the question of what is the mechanism that underlies this very fundamental process of brightness estimation, which is a building block of many other kinds of visual analyses.”  ... ' 
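The stimulus is easy to construct. A minimal NumPy sketch (my illustration, not code from the MIT study) builds it and confirms that the two patches really are pixel-for-pixel identical:

```python
import numpy as np

# Simultaneous brightness contrast: two patches with *identical* values
# placed on a light-to-dark luminance gradient.
H, W = 200, 400
gradient = np.tile(np.linspace(1.0, 0.0, W), (H, 1))  # light on the left, dark on the right

stimulus = gradient.copy()
gray = 0.5  # both patches use exactly this value
for x0 in (80, 280):  # one patch on the light side, one on the dark side
    stimulus[80:120, x0:x0 + 40] = gray

# The patches are provably identical, even though a viewer of the rendered
# image would judge the one on the dark side to be brighter.
left = stimulus[80:120, 80:120]
right = stimulus[80:120, 280:320]
print(np.array_equal(left, right))  # True
```

Rendering `stimulus` with any image viewer makes the effect obvious: same values, very different appearance.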

Sunday, June 21, 2020

IBM Introduces Watson Works with AI

Newly introduced:

Watson Works
Work safe, work smart, and ensure the health and productivity of your people in a changing workplace  ... 

Watson Works is a curated set of products that embeds Watson AI models and applications to help you:

Decide when employees can return to the workplace 
Organize and manage facilities and adhere to new protocols
Answer customer and employee questions on COVID-19
Maximize the effectiveness of contact tracing
Secure and protect your employees and organization

Schedule a consultation  ... 

Marty the Robot at Stop & Shop

Good view of what they are doing with robots at Stop & Shop. Mostly mapping the store, feeding back video images, and raising alerts about spills.  It will not talk to the shopper.  An impressive rollout, and one of the first examples of a robot moving through the store alongside shoppers.  I have never seen it live, but will make a note to.



Marty the Robot Rolls out AI in the Supermarket  in AI Trends
Marty the supermarket robot is among the first to travel with customers in the store, looking to avoid collisions and find spills.    By John P. Desmond, AI Trends Editor 

When six-foot-four inch Marty first rolled into Stop & Shop, the robot walked into history. Social robot experts say it is among the first instances of a robot deployed in a customer environment, namely supermarkets in the Northeast. 

Marty rolls around the store looking for spills with its three cameras. It does take the place of the human worker, called an associate, that did the same thing, but it means the associate can do something else. Doing the walk-around of the store is seen as a mundane task. 

Marty does not talk or tell jokes. Unlike Alexa, who many children in the store undoubtedly interact with at home, Marty will not respond. The robot does notify associates when it sees with its computer vision that something on the floor needs to be cleaned up, through the public address system. An associate comes over to clean it up, and presses a button on Marty that it’s done. Marty takes a picture of the cleaned-up aisle.    ..." 

Badger Technologies Rolled Out 500 Martys in 2019 

The AI in Marty is concentrated on the machine vision and the collision-avoidance navigation features, according to Tim Rowland, CEO of Badger Technologies, makers of Marty. After trials, Badger rolled out 500 multi-purpose robots into Stop & Shop and Giant/Martin’s grocery stores on the East Coast over the course of 2019. Each Marty is equipped with navigation systems, high-resolution cameras, many sensors and its software systems.   ... "

At the link, much more detail and Images.
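The workflow the article describes, detect a spill, announce it over the PA, wait for the associate's "done" button, then photograph the aisle, amounts to a small state machine. A hypothetical sketch of that loop (all names are illustrative, not Badger Technologies' actual software):

```python
from enum import Enum, auto

class State(Enum):
    PATROLLING = auto()        # roaming the aisles, cameras scanning
    AWAITING_CLEANUP = auto()  # spill announced, waiting on an associate

class SpillWorkflow:
    def __init__(self):
        self.state = State.PATROLLING
        self.photos = []

    def camera_frame(self, spill_detected: bool):
        """Process one vision result; announce over the PA on a new spill."""
        if self.state is State.PATROLLING and spill_detected:
            self.state = State.AWAITING_CLEANUP
            return "PA: cleanup needed in this aisle"
        return None

    def done_button_pressed(self):
        """Associate confirms cleanup; photograph the aisle and resume."""
        if self.state is State.AWAITING_CLEANUP:
            self.photos.append("photo-of-clean-aisle")
            self.state = State.PATROLLING

bot = SpillWorkflow()
announcement = bot.camera_frame(spill_detected=True)  # triggers the PA message
bot.done_button_pressed()  # associate presses the button; aisle is photographed
```

Note the robot itself stays out of the cleanup loop entirely; it only detects, notifies, and records.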

Blogger Being Updated

I have been alerted that the underlying Blogging capability here will be updated by the end of June.  A quick test seems to show that no problems should occur, but it's possible attached resources will change.   Could cause display changes.   If you see anything amiss, inform me via the email on this page.   The plan is to move ahead seamlessly.

To Date:  20K Posts,  2,492K Reads

Oil and Gas use of AI Technology

I worked with this industry for a while. Here is a good non-technical overview:

Oil & Gas Industry Transforming Itself with the Help of AI   By AI Trends Staff

The oil and gas industry is turning to AI to help cut operating costs, predict equipment failure, and increase oil and gas output.

A faulty well pump at an unmanned platform in the North Sea disrupted production in early 2019 for Aker BP, a Norwegian oil company, according to an account in the Wall Street Journal. The company installed an AI program that monitors data from sensors on the pump, flagging glitches before they can cause a shutdown, stated Lars Atle Andersen, VP of operations for the firm. Now he flies in engineers to fix such problems ahead of time and prevent a shutdown, he stated.

Aker BP employed a solution from SparkCognition of Austin, Texas.

Partnerships are forming throughout the industry. Exxon Mobil last year started a partnership with Microsoft to deploy AI programs to optimize operations in the West Texas Basin. The AI is needed to interpret data coming from millions of sensors that monitor Exxon refineries all over the globe. Total S.A., the French multinational oil and gas company, is partnering with Google to better interpret seismic data with the goal of better exploiting existing assets.

Advances in machine learning and the falling cost of data storage are factors in the move to AI in big oil. “When you mention data at this scale to data scientists, you can see them start salivating,” stated Sarah Karthigan, data science manager at ExxonMobil. The company has a database consisting of about five trillion data points. “The intent here is that we can run our plants more efficiently, more safely and potentially with fewer emissions.”

Sarah Karthigan, Data Science Manager, Exxon Mobil
With the price of oil low, oil and gas companies are looking for efficiencies. Deployment of AI in upstream operations could yield savings in capital and operating expenses of $100 billion to $1 trillion by 2025, according to a 2018 report by PwC.  .... "

Not Self-Driving Cars, But Robots That Could Drive Cars?

Interesting and bold challenge.    Build a robot that would autonomously drive cars, with vision, decision making, and interaction with car systems, adapting to various kinds of cars.   Not necessarily an android that looks like a person, with arms and legs and head and eyes, but the equivalent. 

It is taking too long to get car-based autonomy, so would this be quicker, cheaper, more adaptable?  But even having a robot navigate complex spaces, like the home, is also hard.     Lance Eliot discusses and poses an instructive list of positives and negatives.   See also his Forbes column:  https://forbes.com/sites/lanceeliot/   :

What If We Made A Robot That Could Drive Autonomously?  By Lance Eliot, the AI Trends Insider

There must be a better way, some lament.

It is taking too long, some say, and we need to try a different alternative.

What are those comments referring to?

They are referring to the efforts underway for the development of AI-based self-driving driverless autonomous cars.

There are currently billions upon billions of dollars being expended towards trying to design, develop, build, and field a true self-driving car.

For true self-driving cars, the AI drives the car entirely on its own without any human assistance during the driving task. These driverless cars are considered a Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 and Level 3.

There is not as yet a true self-driving car at Level 5, which we don’t yet even know if this will be possible to achieve, and nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

So far, thousands of automotive engineers and AI developers have been toiling away at trying to invent a true self-driving car.

Earlier claims that progress would be fast and sweet have shown to be over-hyped and unattainable.

If you consider this to be a vexing problem, and if you have a smarmy person that you know, they might ponder the matter and offer a seemingly out-of-the-box proposition.

Here’s the bold idea: Rather than trying to build a self-driving car, why not instead just make a robot that can drive?   ... " 

Algorithmic Design for Building

Algorithms are both generating data and using data for the design and construction of buildings, much like managing pertinent metadata.

Algorithms are designing better buildings

Silvio Carta in The Conversation
Head of Art and Design, University of Hertfordshire

When giant blobs began appearing on city skylines around the world in the late 1980s and 1990s, it marked not an alien invasion but the impact of computers on the practice of building design.

Thanks to computer-aided design (CAD), architects were able to experiment with new organic forms, free from the restraints of slide rules and protractors. The result was famous curvy buildings such as Frank Gehry’s Guggenheim Museum in Bilbao and Future Systems’ Selfridges Department Store in Birmingham.

Today, computers are poised to change buildings once again, this time with algorithms that can inform, refine and even create new designs. Even weirder shapes are just the start: algorithms can now work out the best ways to lay out rooms, construct the buildings and even change them over time to meet users’ needs. In this way, algorithms are giving architects a whole new toolbox with which to realise and improve their ideas.

At a basic level, algorithms can be a powerful tool for providing exhaustive information for the design, construction and use of a building. Building information modelling uses comprehensive software to standardise and share data from across architecture, engineering and construction that used to be held separately. This means everyone involved in a building’s genesis, from clients to contractors, can work together on the same 3D model seamlessly.

More recently, new tools have begun to combine this kind of information with algorithms to automate and optimise aspects of the building process. This ranges from interpreting regulations and providing calculations for structural evaluations to making procurement more precise. .... "
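At the simplest end of that toolbox, "working out the best way to lay out rooms" can be framed as an optimization problem. The toy sketch below is my own illustration, not from the article; the rooms and traffic weights are invented. It exhaustively searches orderings of rooms along a corridor so that heavily trafficked pairs end up close together:

```python
from itertools import permutations

# Invented example: rooms along a corridor, with weights for how often
# people walk between each pair. Higher weight means keep them closer.
rooms = ["kitchen", "dining", "office", "bedroom"]
traffic = {("kitchen", "dining"): 10, ("office", "bedroom"): 1,
           ("kitchen", "office"): 2, ("dining", "bedroom"): 3}

def cost(order):
    """Total weighted walking distance for a given room ordering."""
    pos = {room: i for i, room in enumerate(order)}
    return sum(w * abs(pos[a] - pos[b]) for (a, b), w in traffic.items())

# Brute-force search over all orderings; fine for a handful of rooms.
best = min(permutations(rooms), key=cost)
print(best, cost(best))  # the heavily used kitchen-dining pair ends up adjacent
```

Real generative-design tools use far more sophisticated objectives and search methods (and many more constraints), but the shape of the problem, score candidate layouts and search for better ones, is the same.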

Value Creation

Good piece.   Take care to carefully measure the value.

The Value of Value creation
By Marc Goedhart and Tim Koller, McKinsey Quarterly  ( link to PDF)

Challenges such as globalization, climate change, income inequality, and the growing power of technology titans have shaken public confidence in large corporations. In an annual Gallup poll, more than one in three of those surveyed express little or no confidence in big business—seven percentage points worse than two decades ago.1 Politicians and commentators push for more regulation and fundamental changes in corporate governance. Some have gone so far as to argue that “capitalism is destroying the earth.”2

This is hardly the first time that the system in which value creation takes place has come under fire. At the turn of the 20th century in the United States, fears about the growing power of business combinations raised questions that led to more rigorous enforcement of antitrust laws. The Great Depression of the 1930s was another such moment, when prolonged unemployment undermined confidence in the ability of the capitalist system to mobilize resources, leading to a range of new policies in democracies around the world.

Today’s critique includes a call on companies to include a broader set of stakeholders in their decision making, beyond just their shareholders. It’s a view that has long been influential in continental Europe, where it is frequently embedded in corporate-governance structures. The approach is gaining traction in the United States, as well, with the emergence of public-benefit corporations, which explicitly empower directors to take into account the interests of constituencies other than shareholders. .... ' 

Saturday, June 20, 2020

Sony Aibo Updated, will Greet You at the Door

This seems to be the only assistant system left that claims to be mobile and family friendly: home oriented, with elements of a home pet emphasized.   We visited their lab early on, and that was one of the claims, but it never seems to have reached that goal.  Notably similar in aims to the Kuri, now defunct.



Sony's Aibo robot will now greet you at the front door.  So, so adorable.
Nick Summers, @nisummers in Engadget

Sony’s robotic Aibo pup continues to learn new tricks. Thanks to a new software update, the android companion will now predict when you come home and sit patiently at the front door. Or that’s the idea, anyway. According to Sony’s website, you’ll first need to assign a meeting place — the entrance to your home — by saying a phrase like “this is where you should go.” Aibo should then lower its head and ‘sniff’ the ground to indicate that it’s storing the location. If the process is successful, a door icon should appear on the map located inside the companion app.  ... "

Sony claims over 100K sold in various forms, costing as much as $2,900 each. There had been some indications it might be abandoned, but this recent software update suggests otherwise.   I am glad the general idea continues; it has a place. 

Some detail in the Wikipedia:  https://en.wikipedia.org/wiki/AIBO

Patient Survey for Telemedicine

Useful to get some real experience from this event:

Doctor.com's Patient Survey Reveals Surprising Trends About Telemedicine Adoption Amid Reopenings
PR Newswire
June 16, 2020

A nationwide survey of 1,800 patients by healthcare marketing automation company Doctor.com found evidence of massive telemedicine adoption during the current pandemic, as well as growing demand for telemedicine services in the coming years. Most (83%) of the surveyed patients expect to use telemedicine after the pandemic, 55% are willing to use telemedicine to see new doctors, and 69% said "easy-to-use technology" would help them decide to make a telemedicine appointment. Moreover, 71% would consider using telemedicine services now, while 83% are likely to use such services after the pandemic. Doctor.com CEO Andrei Zimiles said, "As telemedicine becomes part of a 'new normal,' it is critical that providers begin shifting their long-term care strategies to incorporate virtual care and meet patients' rapidly evolving expectations for this channel."

High Quality Images of Moving Objects

I recall having to solve this problem when diagnosing from images of moving manufacturing machine parts.

Capturing Moving Subjects in Still-Life Quality
EPFL News (Switzerland)
June 18, 2020

Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) Advanced Quantum Architecture Laboratory and the University of Wisconsin-Madison (UW-Madison) Wision Laboratory have developed a method for capturing extremely clear images of moving subjects. UW-Madison's Mohit Gupta borrowed EPFL's SwissSPAD camera, which generates two-dimensional binary images at a resolution of 512 x 512 pixels. EPFL's Edoardo Charbon said SwissSPAD captures 100,000 binary images per second, as an algorithm corrects for variations; the researchers built a high-definition image of a moving subject by combining these photos. The team aims to repeat the experiment with the MegaX camera, which Charbon said "is similar to SwissSPAD in many ways; it's also a depth-sensing camera, thus it can generate [three-dimensional] images."   ... " 
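The underlying idea of merging many 1-bit frames can be shown with a toy simulation. This is my sketch, not the EPFL/UW-Madison pipeline; a real method also aligns the frames to compensate for subject motion before merging, which is what makes moving subjects come out sharp:

```python
import numpy as np

# A single-photon camera like SwissSPAD outputs binary frames: each pixel
# either fired or it didn't, with a probability tied to scene brightness.
rng = np.random.default_rng(0)
truth = np.linspace(0.1, 0.9, 64).reshape(8, 8)  # toy scene intensities in [0, 1]

# Simulate 10,000 one-bit frames (SwissSPAD captures 100,000 per second).
frames = rng.random((10_000, 8, 8)) < truth  # boolean: did each pixel fire?

# Averaging the binary frames recovers a grayscale estimate of the scene.
estimate = frames.mean(axis=0)
print(float(np.abs(estimate - truth).max()))  # small reconstruction error
```

With enough frames the per-pixel average converges to the true intensity, which is why a burst of noisy 1-bit snapshots can yield a clean still-life-quality image.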

Upcoming ISSIP/CSG Talk: Robots in the Pandemic

Correspondent Jim Spohrer talks about robot tech in our pandemic futures.    I will post about this and point to the transcript.

ISSIP Speaker Series: COVID-19 & Future of Work and Learning

Speaker: Jim Spohrer, Director, Cognitive Open Tech, IBM
Title: How will COVID-19 affect the need for and use of robots in a service world with less physical contact?
Date & Time: June 24, 2020, 12:30-1:00 PM US Pacific Time, (on Zoom, info below)

Abstract: As AI and robotics come to the service world, including retail, hospitality, education, healthcare, and government, some jobs will go away, some new jobs will be created, and the income required for a family to thrive might be lessened.   In this creative session participants will be asked to engage in discussing three scenarios below – and the wicked problem of the bespoke impact on livelihood and jobs, which is creating uncertainty and concerns.   The groups will then report back on which scenarios they find more desirable. Click here for more details about this session.

More on this talk and background:  http://www.issip.org/about-issip/community/covid-19-working-group/

Recorded talk, slides:   https://youtu.be/RchxIKum_tI 

See also:  http://www.issip.org/   The International Society of Service Innovation Professionals, ISSIP 

ISSIP Newsletter: https://mailchi.mp/2f401b893caa/issip-june-2020-newsletter?e=78b83a31fb

Google Assistant and Duplex can Listen Better

This made me think a while about the implications.  It does not necessarily imply a loss of privacy; these are the action skills that we accept as extensions to an assistant.   Commands.   It would seem very useful for office voice actions.   So I could say 'Copy this', and what follows would be recorded, or 'Translate this', and what follows would be translated.     We know that Google can do that ... it just needs better contextual operation, as long as we can control the results.  Lately I have also been impressed by how well the assistant handles my mis-stated commands.  But should that make us fear for privacy?

Google Assistant actions can now continuously listen for specific words
Kyle Wiggers@KYLE_L_WIGGERS in Venturebeat

Google today detailed new tools for partners developing on Google Assistant, its voice platform used by over 500 million people monthly in 30 languages across 90 countries. Actions Builder, a web-based integrated development environment (IDE), provides a graphical interface to show conversation flows and support debugging and training data orchestration. Continuous Match Mode allows Google Assistant to respond immediately to a user’s speech by recognizing specified words and phrases. And AMP-compliant content on smart displays like Nest Hub Max speeds up browsing via the web.
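Continuous Match Mode itself is configured through Google's Actions Builder rather than written by hand, but the core behavior, matching a rolling speech transcript against a fixed set of registered words, can be sketched generically. Everything below is illustrative, not the Actions SDK:

```python
# A registered phrase set, as an action might declare for continuous matching.
KEYWORDS = {"next", "previous", "repeat", "stop"}

def match_stream(transcript_words):
    """Yield (index, word) each time a registered keyword is heard."""
    for i, word in enumerate(transcript_words):
        w = word.lower().strip(".,!?")  # normalize simple punctuation
        if w in KEYWORDS:
            yield i, w

# Simulate a stream of recognized words arriving from speech recognition.
heard = list(match_stream("please stop, then repeat that".split()))
print(heard)  # [(1, 'stop'), (3, 'repeat')]
```

The point of the real feature is latency: because the assistant only has to spot words from a small declared set, it can respond immediately instead of waiting for a full utterance to be parsed.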

Google also revealed that Duplex, its AI chat agent that can arrange appointments over the phone, has been used to update over half a million business listings in Google Search and Google Maps to date. Back in March, CEO Sundar Pichai said Google would use Duplex “where possible” to contact restaurants and businesses so it can accurately reflect hours, pick-up, and delivery information during the pandemic. The company subsequently expanded Duplex in a limited capacity to the U.K., Australia, Canada, and Spain, adding support for the Spanish language in the last instance.  ... "