In the latest installment of our Future of Work series, PBS NewsHour science correspondent Miles O’Brien visits MIT’s Interactive Robotics Laboratory to understand the “new species” of robots scientists are designing to work alongside humans safely. Though the devices often excel at repetitive tasks, will they be able to function just as well in dynamic environments, such as the fast-paced world of health care?
One of the big questions that’s being debated about the future of work is the extent to which robots, artificial intelligence and automation may further eliminate, add or change our jobs.
We’re going to spend the next couple of nights of our series exploring that idea.
Tonight, Miles O’Brien looks into whether we humans may find a better partnership with the bot next to us.
It’s the focus of our weekly story on the Leading Edge of science and technology.
Robots on the march, rising, not falling.
And if you watch the YouTube channel for robot innovator Boston Dynamics, you might conclude they are out to replace us, or even worse.
But at MIT, roboticist Julie Shah and her team are devising robots with enough artificial intelligence to collaborate with humans, cobots.
If we’re just going to design them to replace, we’re living in a very limited sphere of what this technology can do. And we can really open up the possibility by designing them as collaborators.
So, we do more together?
We can do more together.
That’s a relief, robots that march with us, on our side, right? But, first, do no harm, please.
So, Shah is making it safe for workers to get closer to the big, strong, fast, but not-so-smart robots that are already used on assembly lines all over the world.
We now have a new species, really, of inherently safe robots that can work right alongside people. They can bump into you, and not permanently harm you in any way. That’s a game-changer.
With the right sensors and tracking software, the robot slows or stops when a human is in harm’s way. And with artificial intelligence, it learns more efficient ways to avoid causing injury.
Collaboration? Not really. This is more like coexistence in close proximity. And on factory floors, more and more robots are emerging from behind protective barriers.
If you look at a real cutting-edge warehouse these days, you see this fascinating dance between people and robots.
Andrew McAfee is a principal research scientist at MIT. He studies how digital technologies are changing business, the economy, and society.
Robots bring the shelves up to the person and rotate them so that the right item is right in front of the person. And the person’s job is to reach in and, with their extremely dexterous hands, grab two of those things out and put them in a bin that’s going to go off and get shipped off to me somewhere down the road.
The robots are not yet capable of doing that, reach in, grab two, and put them away, as accurately and as quickly as a human being can.
Julie Shah and her team aim to take this to a new level, devising machines that are flexible, smart assistants, able to adapt, even anticipate what’s next, while their human co-workers do what they do best, thinking intuitively, creatively, innovating efficiencies.
In almost every setting where people are doing much of the work, there are little pieces of the work that can very easily be done by robots today. And the problem is not in enabling the robot to do those little pieces of work, but enabling the robot to integrate and work effectively with the person, so that they can accomplish the task together.
A world filled with smart, helpful robots has long been a science fiction dream.
Welcome to Altair IV, gentlemen. I am to transport you to the residence.
But the real world is still far from the hope and the hype. Big ideas for revolutionizing the way we work with artificial intelligence and robots are just that: big ideas.
But Julie Shah is undeterred, taking her approach into a workplace where things are less scheduled and predictable, to say the least.
Hey, I would like to offer a recommendation.
The labor and delivery floor at a hospital, where the nursing supervisors make an air traffic controller’s job look easy.
Shah’s robot is programmed with enough artificial intelligence to help nurses decide how to assign rooms and personnel.
If we can off-load even just the simple decisions and free up the cognitive capacity of these nurses to handle the most difficult situations, we can significantly improve safety in hospitals.
At Massachusetts General Hospital, they are seeing if they can employ artificial intelligence in the operating room in real time for surgeons, like Ozanan Meireles, here in the midst of a laparoscopic sleeve gastrectomy, stapling the stomach of a morbidly obese patient.
In the next room, sits his colleague, surgeon Daniel Hashimoto. They are part of a team developing software smart enough to offer advice to a surgeon during a complex operation. They have shown the software hundreds, and soon thousands, of videos of the same procedure, so the machine learns the sequence and patterns of success.
This artificial intelligence is compared to what is happening live in the O.R.
What it’s looking at is, OK, this frame followed another. What’s the probability that this frame is supposed to follow that? So, the red here marks areas where the computer has decided, with high confidence, that the particular frame it’s looking at belongs to a certain step.
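The frame-by-frame logic Hashimoto describes — assigning each video frame to a procedure step and flagging the assignment only when confidence is high — can be sketched roughly as follows. This is a hypothetical illustration, not the MGH team’s actual software: the step names, probabilities, and threshold are all assumed, and the trained model that would produce the per-frame probabilities is left out.

```python
# Hypothetical sketch: assumes a trained model has already produced, for each
# video frame, a probability distribution over named procedure steps.
# Step names and the 0.8 threshold are illustrative, not from the MGH system.

STEPS = ["port placement", "mobilization", "stapling", "inspection"]

def label_frame(probs, threshold=0.8):
    """Return (step_name, is_high_confidence) for one frame's probabilities."""
    best = max(range(len(probs)), key=probs.__getitem__)
    return STEPS[best], probs[best] >= threshold

# One frame the model strongly assigns to the stapling step,
# and one ambiguous frame split between two steps.
confident = label_frame([0.02, 0.05, 0.90, 0.03])  # ("stapling", True)
uncertain = label_frame([0.10, 0.45, 0.40, 0.05])  # ("mobilization", False)
```

Frames that fall below the threshold would simply not be highlighted on screen, mirroring the red markings Hashimoto points to in the segment.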
One of the most important steps is the placement of the staples relative to a notch in the stomach called the incisura angularis. If the staples are too close, the patient will have trouble swallowing.
Let’s say a novice surgeon, or a surgeon in a community area who is the only surgeon available, starts this type of procedure and gets too close to it. Artificial intelligence, through computer vision, could actually give some intuition to that surgeon and say, you might be better off two or three centimeters away from where you are.
Hashimoto envisions a day when surgeons can instantly draw on the latest and best advice from everywhere, a collective surgical consciousness, if you will, making decisions linked to patient outcomes years later.
It will take some time before anything like this is deployed. Among the thorny issues, it is often unclear to computer programmers just how machine learning software reaches its conclusions. It’s a black box.
You really need a human being, whether it’s a physician, whether it’s a programmer, whether it’s a lawyer, to look at the output and say, is this a valid sort of recommendation that we’re following?
Much as we love our machines, humans like to have humans in the loop, especially when our lives are on the line.
And, in general, when something bad happens to a human being, they kind of want a human being to blame, to be responsible for that. It’s just our wiring.
Machines and humans may be good co-workers for now, but as the robots grow increasingly smarter and more flexible, there may come a day when they don’t need a human partner at all.
Don’t believe me? Just watch.
For the “PBS NewsHour,” I’m Miles O’Brien in Boston.
A lot more rhythm than many of us have.