We are proud to tell you that the TNG Hardware Hacking Team has finalized our new showcase “Beam me up – Holographic communication using the Microsoft Hololens”. Yesterday we premiered the accompanying talk at the TNG Big TechDay 10. With this application you can have a holographic telepresence experience: while one person is filmed by a 3D depth camera like the Intel RealSense or Microsoft Kinect, the other person can see a hologram of that person through the Hololens glasses. Seeing the world this way is a bit like the holograms you might know from Star Wars. We will now put some final polish on the showcase as well as on the talk. Once this is done we will – as with all our demos – publish a video and a technical article on the subject.
We are currently preparing for two of our next appearances featuring the Avatar telepresence system using Nao and Kinect. This will be a first, since we will be showing the demo in two places at the same time. Over the last few weeks we put a lot of energy into duplicating our showcase so that we can run it in two locations simultaneously.
The first one will take place at the IoT TechDay in Utrecht on the 19th of April. Don’t miss the talk given by Thomas Endres and Thomas Reifenberger and see the Avatar in action. Visit our presentation from 11:30 to 12:15 in room “Polar”.
In parallel, Florian Gather and Markus Spanier will showcase our Nao robot at the MuleSoft Connect conference in San Francisco. We will be there with our showcase at the welcome reception. We will then have another appearance on the 19th of April from noon to 13:30 at the developer lab and a third one on the 20th of April from 16:30 to 17:30. There you can try the Avatar system for yourself.
We are currently preparing for two of our next talks about the Avatar Telepresence system using Nao and Kinect. The first one will take place at the Mobile Tech Conference in Munich on the 14th of March. Don’t miss us and see the Avatar in action. Visit our presentation from 10:00 to 11:00 in room “Forum 8”.
After that we will be at the Javaland Conference in Brühl near Cologne from the 28th to the 29th of March. We will give our telepresence talk there as well, from 14:00 to 14:45 on Tuesday, 28th of March. Besides that, we will showcase five of our demos at the Javaland Innovation Lab. There you can try the Avatar system for yourself, but you can also see the world through the eyes of a Terminator using our AugmentedRift AR device. In addition we will showcase some of our demos for the Microsoft Hololens mixed reality device. And you can try to shoot some parrots in our VR game “ParrotAttacks VR” – or with gestures via our gesture controlled browser game.
This February, we held two hackathons within the TNG premises. Both were held for e-fellows students as well as for IT professionals.
The first one, on the 10th of February, was about Intel IoT hardware and IoT technology in general, featuring teams using various sensors and actuators to control rovers, build peer-to-peer networks, and more.
The second one, on the 25th of February, featured game development using the Microsoft Hololens mixed reality device. People built virtual pianos, augmented reality physics simulations, flower planting games, and more.
We had a lot of fun and enjoyed working with all those brilliant people. This was surely not the last hackathon we will organize.
The OOP Konferenz 2017 is over and we have unpacked all our stuff again. It was really great! We were at the Intel booth, organized the OOP Maker Faire with more than eleven different showcases and gave our talk. Thanks for joining us.
Next week we will be at the OOP conference at the ICM in Munich from Tuesday, 31st of January to Thursday, 2nd of February. It will be a pretty intense week, and we are currently preparing the setup. There will be multiple occasions when you can talk to us and the other guys from TNG Technology Consulting.
We will have our biggest showcase yet demonstrating the last three years of showcase development done by us and our company at the OOP Maker Faire on Wednesday, 1st of February, 6 PM to 10 PM CET; there will be a total of 11 different showcases.
We will be at the Intel booth showcasing our Avatar telepresence robotics experience and some more showcases.
It will also be a premiere for our newest VR game “ParrotAttacks VR” – a virtual reality arcade bird shooter game. A blog post will follow soon. So make sure to stop by the Intel booth or visit the Maker Faire. We’re looking forward to seeing you.
Tomorrow afternoon we will present our talk “Avatar – Telepresence robotics with Nao and Kinect” at the online conference VR with the Best. We’re looking forward to the talk and the additional one-to-one session after the presentation. If you want to participate, you can still register. Our talk will be at 5:20 PM CET (11:20 AM EDT).
For the third time in a row, Thomas Endres and Martin Förtsch were awarded the title of Intel Top Software Innovator 2016 in Seattle at the Intel Software Innovator Summit. As an Intel Top Software Innovator you have to train at least 1,000 software developers per year.
At conferences like OOP, Heise Developer World at CeBIT, JavaLand and TNG Technology Consulting’s Big TechDay (and many, many more) we trained more than 3,500 developers all over the world, mainly in Germany, the United States of America and the Netherlands.
Since our projects and our visibility are continuously growing, we don’t want to forget all the developers of the TNG hardware hacking team who help us realize some of our ideas. Without them it would be hard to create awesome showcases all the time!
We don’t just build showcases, we create experiences.
We are currently preparing for our first talk about the Avatar project within the United States. Next week we will speak at the Nao World Congress 2016. If you want to join us, our presentation will take place on Thursday, 13th of October, at 01:15 PM in the NERVE center, Lowell, MA.
Here is what we are going to speak about:
Using the NAO Humanoid robot, virtual reality glasses, and 3D camera sensors you can experience the world through the eyes of a Nao robot and control Nao via gestures. TNG Technology Consulting has built a telepresence robotics system based on this robot, an Oculus Rift and a Kinect One. The presentation will show how easy it is to program Nao using Python or Java. The speakers will share some insights about the challenges they faced during its implementation. The history of telepresence robotics, current trends, and examples for real world fields of application will also be a focus of the presentation.
Are you in a hurry, or do you just want to see a cool video of a real-time telepresence robotics system that gives you an out-of-body experience, without being interested in too many details of how it works? Have a look!
Synopsis
“Avatar” is an American science fiction film by James Cameron that premiered in 2009. The film is about humans from Earth colonizing the planet Pandora, which is populated by the Na’vi, a humanoid species indigenous to Pandora. To create trust, the humans developed a system that makes it possible to incarnate as a real Na’vi. In the film, Jake Sully (Sam Worthington) slips into the role of such a native on Pandora to accomplish his mission. With this very special human technology he is able to look and feel like a real Na’vi.
“Fìswiräti, nga pelun molunge fìtseng?”
“This creature, why did you bring him here?”
The goal of “Project Avatar” was to build a system that gives you such an out-of-body experience using off-the-shelf hardware. The idea was to control a robot at another location using full body gesture control. The teleoperator on the remote site is a Nao robot which is equipped with sensors to gather environmental data. Using feedback channels, this data can be transferred back to the human controller.
The human operator should actually be able to see the world through the eyes of the Nao robot, including a 3D field of view. This can be realized using stereo RGB cameras. Using tactile sensors, the robot is able to send touch events over the feedback communication channel to the human controller. The controller can thus actually feel sensor-based events occurring in the environment at the robot’s site, comparable to a kind of force feedback.
The hardware hacking team of TNG Technology Consulting GmbH wanted to implement a prototypical version of an immersive telepresence robotics system within one day only.
Telepresence robotics
In the context of telepresence robotics there is an operator site and a remote site. A teleoperator at the remote site is controlled by a human operator who is located at the operator site. The main goal of telepresence robotics is to overcome the four barriers between the operator and the remote environment. The four barriers to overcome are…
Danger
A teleoperator should be able to act in places that are hostile to human life (e.g. robots clearing up after a worst-case disaster).
Distance
A teleoperator should be controllable from a distant place using a communication channel (e.g. communication via satellites).
Matter
A teleoperator should be able to access environments that are hostile to human life (e.g. deep-sea robotics).
Scale
A teleoperator should be able to work with much more precision and accuracy than a human (e.g. for remote surgery) or with machines that are much larger and stronger than an average human being.
Multimodal telepresence robotics
To implement a telepresence robotics system that delivers a real out-of-body experience, feedback channels are essential. Using the communication channel in reverse, gathered sensor information can be transferred from the remote site back to the operator site. In terms of multimodal telepresence robotics, one essential piece of feedback is a 3D visual display. This can be realized with at least two RGB cameras attached to the teleoperator itself. The video signal needs to be streamed through the feedback channel to the human operator with as little delay as possible.
Requirements
To implement our telepresence robotics solution, we needed special hardware. To decide which hardware to use, the hardware hacking team of TNG Technology Consulting GmbH defined some functional requirements. Our requirements for this project were:
The human operator controls the movements of the teleoperator at a remote site using gestures based on full body skeleton tracking
The human operator receives 3D visual feedback through a head-mounted display
The human operator’s head movements result in corresponding head movements of the teleoperated humanoid robot
The human operator receives tactile feedback when the teleoperator is touched on the head
The data between the human controller and the teleoperator has to be transferred without any cables
Signal delays should be minimal for a pure out-of-body experience with your own Avatar.
Hardware
We identified different 3D camera sensors capable of skeleton tracking, among them the Leap Motion, the Microsoft Kinect and the Intel® RealSense™. The Leap Motion is perfectly suited for hand tracking. As we needed full body skeleton tracking, the Intel® RealSense™ and the Microsoft Kinect made the shortlist. For a full comparison of different gesture-control-enabled 3D camera sensors, have a look at this article.
At the moment, the Intel RealSense SDK is only capable of tracking the upper body. As we needed full body tracking to interpret a walking gesture metaphor, we decided on the Microsoft Kinect 2. The implementation was done in such a way that we can easily switch to the Intel RealSense later on, once full body skeleton tracking becomes available.
A typical head-mounted display capable of showing 3D vision is the Oculus Rift DK2. As we had already gathered some experience with these virtual reality headsets through our “Augmented Rift” Terminator Vision project, we decided to use this device.
One of the most well-known and easily programmable humanoid robots is the Nao robot by SoftBank Robotics (formerly known as Aldebaran). You can easily implement movements using e.g. Python, C++ or Java. In 2015 over 5,000 Nao units were sold in 50 countries worldwide, so there is an extremely active community behind the product.
Demonstration video
Architecture Overview
On the operator site we are using a normal laptop computer. At the remote site the Nao robot manipulates the environment. The Kinect camera and the Oculus Rift are directly attached to the laptop. Using the WAMP communication protocol, the data of these controllers is sent to the Nao robot. The Kinect sensor tracks the whole body of the human operator except for the head movements. To control the head movements of the Nao robot we are using the built-in accelerometer of the Oculus Rift, since this has much better accuracy than tracking the human operator’s head with the Kinect sensor.
The WAMP data is first sent to a WAMP router named “Crossbar” which is running on a Raspberry Pi Rev. 3. The controller software is also located on the Raspberry Pi. There the incoming data is transformed into appropriate control commands for the robot. These commands are then sent to the Nao via Wi-Fi. The Nao also sends data back to the laptop, e.g. its current posture; the reverse communication channel is used for such information.
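To give an idea of what such a WAMP message could look like on the operator side, here is a minimal sketch of a publisher using the Python Autobahn library. The router address, realm and topic name are assumptions for illustration only, not the actual values from our setup.

# Minimal WAMP publisher sketch using Autobahn (asyncio flavour).
# Router URL, realm and topic are hypothetical placeholders.
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

class JointPublisher(ApplicationSession):
    async def onJoin(self, details):
        # Publish one (hypothetical) head joint position to the Crossbar router.
        self.publish("com.example.avatar.joints",
                     {"joint": "Head", "x": 0.1, "y": 0.4, "z": 2.0})
        self.leave()

if __name__ == "__main__":
    # Crossbar running on the Raspberry Pi (address is an assumption)
    runner = ApplicationRunner(url="ws://192.168.0.42:8080/ws", realm="realm1")
    runner.run(JointPublisher)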
A stereo RGB camera connected to a second single-board computer (like an Intel Edison or Intel Joule) is attached to the robot’s head using custom 3D-printed glasses inspired by Futurama’s “Bender”. The stereo camera captures the video stream using the open source software GStreamer. A UDP stream is used to transfer the video back to the laptop, where it is streamed into the Oculus Rift using the DirectX wrapper SharpDX.
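As a rough illustration of the video path (not our exact pipeline), a single camera could be streamed from the single-board computer with a GStreamer pipeline like the one below. The device, encoder settings, target IP and port are assumptions for illustration only.

import subprocess

# Hypothetical GStreamer pipeline: grab one camera, JPEG-encode the frames
# and send them as an RTP/UDP stream to the operator laptop.
TARGET_IP = "192.168.0.10"  # operator laptop (assumption)

pipeline = (
    "gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! "
    "jpegenc quality=60 ! rtpjpegpay ! "
    "udpsink host={ip} port=5000".format(ip=TARGET_IP)
)
subprocess.run(pipeline, shell=True)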
Implementation details
Skeleton Tracking
Kinect
Gathering full body skeleton data from the Microsoft Kinect camera is very easy and needs only a few lines of code.
public void Init(KinectSensor kinectSensor) {
    BodyFrameReader bodyReader = kinectSensor.BodyFrameSource.OpenReader();
    bodies = new Body[kinectSensor.BodyFrameSource.BodyCount];

    bodyReader.FrameArrived += BodyFrameArrived;
}

private void BodyFrameArrived(object sender,
                              BodyFrameArrivedEventArgs bodyFrameEvent) {
    BodyFrameReference frameReference = bodyFrameEvent.FrameReference;
    BodyFrame frame = frameReference.AcquireFrame();

    // AcquireFrame() may return null if the frame is no longer available
    if (frame == null) {
        return;
    }

    using (frame) {
        frame.GetAndRefreshBodyData(bodies);

        foreach (var body in bodies) {
            if (body.IsTracked) {
                // example of how to get the head joint position
                Joint head = body.Joints[JointType.Head];

                float x = head.Position.X;
                float y = head.Position.Y;
                float z = head.Position.Z;

                // do some awesome stuff
            }
        }
    }
}
First you have to initialize your BodyFrameReader. As the Kinect sensor can track up to six bodies, you should make sure to send the skeleton data of only one body to the Nao robot. Otherwise the Nao would receive movement data of all recognized bodies, which would look like a robot going insane!
When the Kinect receives a BodyFrame you are able to acquire the current frame to access the skeleton data of the recognized bodies. Using the frame you can call the method GetAndRefreshBodyData(), which refreshes the skeleton data for all recognized bodies. Now you can access the so-called Joints and their 3D coordinates.
Intel RealSense
When using the Intel RealSense SDK, the implementation is just as easy as with the Kinect 2.
// ptd is a PXCMPersonTrackingData instance
Int32 npersons = ptd.QueryNumberOfPeople();

for (Int32 i = 0; i < npersons; i++) {
    // Retrieve the PersonTracking instance
    PXCMPersonTrackingData.Person ptp = ptd.QueryPersonData(
        PXCMFaceTrackingData.AccessOrder.ACCESS_ORDER_BY_ID, i);
    PXCMPersonTrackingData.PersonJoint ptj = ptp.QuerySkeletonJoints();

    // work on the tracking data
    int njoints = ptj.QueryNumJoints();
    PXCMPersonTrackingData.JointPoint[] joints = new PXCMPersonTrackingData.JointPoint[njoints];
    ptj.QueryJoints(joints);

    // do some awesome stuff
}
Nao movements
Setup
The Nao robot can easily be programmed using JavaScript, C++, Python and Java. In our solution we used the Python bindings. First we need to set up the Nao robot. To be able to control the Nao with Python, we need to import the naoqi library, which is available on the Nao website.
import sys
import almath
import time
from naoqi import ALProxy

PORT = 9559
IP = "ip.address.of.your.nao.robot"

# Load modules for different tasks
speechProxy = ALProxy("ALTextToSpeech", IP, PORT)    # Allows NAO to speak.
audioProxy = ALProxy("ALAudioPlayer", IP, PORT)      # Allows NAO to play audio.
postureProxy = ALProxy("ALRobotPosture", IP, PORT)   # Allows use of predefined postures.
motionProxy = ALProxy("ALMotion", IP, PORT)          # Provides methods to make NAO move.
To make use of specific Nao features you need to load different modules. The following code samples show how to make use of them.
Nao learns to speak
To let Nao speak you just have to load the ALTextToSpeech module. Using the method say(), the robot will speak the given text out loud. There are different parameters available to modify the pitch or the speed of the spoken text. Language packages for many languages are available for download in the Nao marketplace.
speechProxy.say("Hello Intel Developer Zone!")
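As a small, hedged example of such a voice parameter (the concrete value is only for illustration):

# Sketch: raise the pitch of the voice slightly before speaking.
speechProxy.setParameter("pitchShift", 1.2)  # pitch multiplier; the exact range depends on the NAOqi version
speechProxy.say("Beam me up!")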
Nao is a poser
Nao is capable of a lot of different postures. Postures are predefined routines inside the Nao kernel which result in a chain of Nao movements to reach a posture state. Nao can go to a posture from every possible current position, and switching between postures like “LyingBelly” and “StandInit” is not a problem either.
With the following code snippet Nao will change its posture to “StandInit”, followed by a “LyingBelly” posture. You can easily switch between all the postures shown in the picture above.
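A minimal sketch of such a posture change, using the postureProxy defined above (the speed fraction of 0.8 is just an example value):

# Go to "StandInit", wait a moment, then lie down on the belly.
# The second argument is the relative speed of the movement (0.0 to 1.0).
postureProxy.goToPosture("StandInit", 0.8)
time.sleep(2.0)
postureProxy.goToPosture("LyingBelly", 0.8)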
As using only predefined postures is pretty boring, we now want to move the robot’s joints without using predefined macros. Nao has 25 degrees of freedom based on the joints of the human skeleton.
Let’s say we want to move the left and the right arm to a fixed position as shown in the picture above. For this we need to make use of the ALMotion module. Using the motionProxy we can move the different Nao joints to defined angles, i.e. positions.
The method setAngles() expects a list of joints to move, the target angles for those joints and the speed of the movement, which has to be between 0 (slow) and 1 (fast).
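A minimal sketch using the motionProxy from the setup above could look like this; the joint names and angle values are only example values:

# Sketch: raise both arms to a fixed position.
# Angles are given in radians; almath.TO_RAD converts from degrees.
names = ["LShoulderPitch", "RShoulderPitch"]
angles = [-45.0 * almath.TO_RAD, -45.0 * almath.TO_RAD]

motionProxy.setStiffnesses("Body", 1.0)    # joints need stiffness to move
motionProxy.setAngles(names, angles, 0.2)  # 0.2 = 20% of the maximum speed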
Conclusion
Using only off-the-shelf hardware, the software consulting team of TNG Technology Consulting GmbH was able to realize a multimodal telepresence robotics system. Controlling the Nao with gestures was easy to implement within only one day. One of the major challenges was processing the massive amount of video data and transferring it over a wireless connection. Using an Intel IoT Gateway and a special RouterBoard we were finally able to transfer the video signal and all other controller data with a delay low enough for a good out-of-body experience.
Authors
If you are interested in this showcase or a conference talk about this topic don’t hesitate to contact us!
FireP4j is a library for the JVM which allows you to log to the FireBug console. This way you can see the log output within the browser and you don't have to mess up your HTML code anymore.