Parrots On OOP conference 2017

February 26, 2017

The OOP conference 2017 is over and we have unpacked all our stuff again. It was really great! We were at the Intel booth, organized the OOP Maker Faire with eleven different showcases, and gave our talk. Thanks for joining us.



Packing for the OOP conference

January 28, 2017

Next week we will be at the OOP conference at the ICM in Munich, from Tuesday, 31st of January, to Thursday, 2nd of February. It will be a pretty intense week. We are currently preparing for the booth setup. There will be multiple occasions where you can talk to us and the other guys from TNG Technology Consulting.

  • We will have our biggest showcase yet, demonstrating the last three years of showcase development done by us and our company, at the OOP Maker Faire on Wednesday, 1st of February, 6 PM to 10 PM CET; there will be a total of 11 different showcases.
  • We will be at the Intel booth, presenting our Avatar telepresence robotics experience and some of our other showcases.
  • We will give a talk on Wednesday, 1st of February, 9 AM CET in the track “Digital Transformation” concerning “Avatar – Telepräsenzrobotik mit Nao und Kinect” (“Avatar – Telepresence robotics with Nao and Kinect”).

It will also be a premiere for our newest VR game “ParrotAttacks VR” – a virtual reality arcade bird shooter game. A blog post will follow soon. So make sure to stop by the Intel booth or visit the Maker Faire. We’re looking forward to seeing you.

oop-showcase-packing

 


Parrots On VR with the Best

December 3, 2016

Tomorrow afternoon we will present our talk “Avatar – Telepresence robotics with Nao and Kinect” at the online conference VR with the Best. We’re looking forward to the talk and the additional one-to-one session after the presentation. If you want to participate, you can still register. Our talk will be at 5:20 PM CET (11:20 AM EST).

arvrwiththebest


Intel Top Software Innovator 2016

November 16, 2016

For the third time in a row, Thomas Endres and Martin Förtsch were awarded the Intel Top Software Innovator 2016 award at the Intel Software Innovator Summit in Seattle. As an Intel Top Software Innovator you have to train at least 1,000 software developers per year.

At conferences like OOP, Heise Developer World at CeBIT, JavaLand and TNG Technology Consulting’s Big TechDay (and many, many more) we trained more than 3,500 developers all over the world, mainly in Germany, the United States of America and the Netherlands.

Since our projects and visibility are continuously growing, we don’t want to forget all the developers of the TNG hardware hacking team who help us realize some of our ideas. Without them it would be hard to create awesome showcases all the time!

We don’t just build showcases, we create experiences.



Parrots On Nao World Congress

October 7, 2016

We are currently preparing for our first talk about the Avatar project in the United States. Next week we will speak at the Nao World Congress 2016. If you want to join us, our presentation will take place on Thursday, 13th of October, at 1:15 PM in the NERVE Center, Lowell, MA.

Here is what we are going to speak about:

Using the Nao humanoid robot, virtual reality glasses, and 3D camera sensors, you can experience the world through the eyes of a Nao robot and control it via gestures. TNG Technology Consulting has built a telepresence robotics system based on this robot, an Oculus Rift and a Kinect One. The presentation will show how easy it is to program Nao using Python or Java. The speakers will share some insights about the challenges they faced during the implementation. The history of telepresence robotics, current trends, and examples of real-world fields of application will also be a focus of the presentation.


Project Avatar – A gesture controlled fully immersive telepresence robotics system with Nao

September 16, 2016

Are you in a hurry, or do you just want to see a cool video showcasing a real-time telepresence robotics system that gives you an out-of-body experience, without being interested in too many details of how it works? Have a look!

Synopsis

“Avatar” is an American science fiction film by James Cameron that premiered in 2009. The film is about humans from Earth colonizing the planet Pandora, which is populated by the Na’vi, a humanoid species indigenous to the planet. To create trust, the humans developed a system which makes it possible to incarnate as a real Na’vi. In the film, Jake Sully (Sam Worthington) slips into the role of such a native on Pandora to accomplish his mission. With this very special human technology he is able to look and feel like a real Na’vi.

“Fìswiräti, nga pelun molunge fìtseng?”

“This creature, why did you bring him here?”

The goal of “Project Avatar” was to build a system with which you can have such an out-of-body experience using off-the-shelf hardware. The idea was to control a robot at another location using full-body gesture control. The teleoperator at the remote site is a Nao robot which is equipped with sensors to gather environmental data. Using feedback channels, this data can be transferred back to the human controller.

The human operator should actually be able to see the world through the eyes of the Nao robot, including a 3D field of view. This can be realized using RGB stereo cameras. Using its tactile sensors, the robot is able to send touch events over the feedback communication channel to the human controller. So the controller can actually feel sensor-based events occurring in the environment at the robot’s site, comparable to some kind of force feedback.
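
As an illustration of such a touch event, here is a minimal sketch of reading Nao’s front head sensor via the ALMemory module. It polls the sensor for simplicity; the actual showcase may use event subscriptions instead, and the proxy setup is explained in the implementation section further below.

    from naoqi import ALProxy

    memoryProxy = ALProxy("ALMemory", "ip.address.of.your.nao.robot", 9559)

    def head_is_touched():
        # "FrontTactilTouched" is 1.0 while the front head sensor is being pressed
        return memoryProxy.getData("FrontTactilTouched") > 0.0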

The hardware hacking team of TNG Technology Consulting GmbH wanted to implement a prototypical version of an immersive telepresence robotics system within just one day.

Telepresence robotics

In the context of telepresence robotics there is an operator site and a remote site. The teleoperator at the remote site is controlled by a human operator who is located at the operator site. The main goal of telepresence robotics is to overcome the four barriers between the operator and the remote environment. The four barriers to overcome are…

  1. Danger
    • A teleoperator should be able to act in places that are hostile to human life (e.g. robots clearing up a disaster area).
  2. Distance
    • A teleoperator should be able to be controlled from a remote place using a communication channel (e.g. communication via satellites).
  3. Matter
    • A teleoperator should be able to access environments that are closed off or hostile to human beings (e.g. deep-sea robotics).
  4. Scale
    • A teleoperator should be able to work with much more precision and accuracy (e.g. for remote surgery) or with machines which are much larger and stronger than an average human being.

Multimodal telepresence robotics

To implement a telepresence robotics system that provides a real out-of-body experience, feedback channels are essential. Using the communication channel in reverse, gathered sensor information can be transferred from the remote site back to the operator site. In terms of multimodal telepresence robotics, one essential piece of feedback information is a 3D visual display. This can be realized with at least two RGB cameras attached to the teleoperator itself. The video signal needs to be streamed through the communication feedback channel to the human operator with as little delay as possible.

Requirements

To implement our telepresence robotics solution, we need to use special hardware. To decide which hardware to use, the hardware hacking team of TNG Technology Consulting GmbH needed to define some functional requirements. Our requirements for this project were:

  1. The human operator controls the movements of the teleoperator at a remote site using gestures based on full-body skeleton tracking.
  2. The human operator gets visual 3D feedback via a head-mounted display.
  3. The human operator’s head movements result in corresponding head movements of the teleoperated humanoid robot.
  4. The human operator receives tactile feedback when the teleoperator is touched on the head.
  5. The data between the human controller and the teleoperator needs to be transferred without any cables.
  6. Signal delays should be minimal for a true out-of-body experience with your own Avatar.

Hardware

We’ve identified different 3D camera sensors capable of skeleton tracking, among others the Leap Motion, the Microsoft Kinect and the Intel® RealSense™. The Leap Motion is perfectly suited for hand tracking. As we needed full-body skeleton tracking, the Intel® RealSense™ and the Microsoft Kinect made the shortlist. For a full overview of different gesture-control-enabled 3D camera sensors, have a look at this article.

At the moment, the Intel RealSense SDK is capable of tracking the upper body only. As we needed full-body tracking to interpret a walking gesture metaphor, we decided on the Microsoft Kinect 2. The implementation was done in such a way that we can easily port our solution to the Intel RealSense later on, once full-body skeleton tracking becomes available.

A typical head-mounted display capable of showing 3D vision is the Oculus Rift DK2. As we had already gathered some experience with these virtual reality headsets during our “Augmented Rift Terminator Vision” project, we decided to use this device.

One of the well-known humanoid robots that can be programmed easily is the Nao robot by SoftBank Robotics (formerly known as Aldebaran). You can easily implement movements using e.g. Python, C++ or Java. In 2015, over 5,000 Nao units had been sold in 50 countries worldwide, so there is an extremely active community behind the product.

 

Demonstration video

Architecture Overview

At the operator site we are using a normal laptop computer. At the remote site, the Nao robot manipulates the environment. The Kinect camera and the Oculus Rift are directly attached to the laptop. Using the WAMP communication protocol, the data from these controllers is sent to the Nao robot. The Kinect sensor tracks the whole body except for the head movement of the human operator. To control the head movements of the Nao robot we are using the built-in accelerometer of the Oculus Rift, since this has much better accuracy than tracking the human operator’s head with the Kinect sensor.
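
As a rough illustration of this head mapping, the following sketch forwards a yaw/pitch orientation (in radians, however it is obtained from the Rift’s tracker) to Nao’s head joints. The clamping limits are only approximate values for Nao’s head joint ranges, not exact specifications.

    from naoqi import ALProxy

    motionProxy = ALProxy("ALMotion", "ip.address.of.your.nao.robot", 9559)

    # Approximate ranges of Nao's head joints in radians (illustrative values)
    HEAD_YAW_RANGE = (-2.0, 2.0)
    HEAD_PITCH_RANGE = (-0.6, 0.5)

    def clamp(value, limits):
        return max(limits[0], min(limits[1], value))

    def on_rift_orientation(yaw, pitch):
        # Map the operator's head orientation onto Nao's head joints
        motionProxy.setAngles(["HeadYaw", "HeadPitch"],
                              [clamp(yaw, HEAD_YAW_RANGE), clamp(pitch, HEAD_PITCH_RANGE)],
                              0.3)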

The data based on the WAMP protocol is first sent to a WAMP router named “Crossbar” which is running on a Raspberry Pi Rev. 3. The controller software is also located on the Raspberry Pi. There, the incoming data is transformed into appropriate control commands for the robot. These commands are then sent to the Nao via Wi-Fi. The Nao also sends data back to the laptop, e.g. its current posture. The reverse communication channel is used for this kind of information.
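
To give an idea of the controller side, here is a minimal sketch of a WAMP subscriber based on the Autobahn|Python library. The router URL, topic name and payload format are hypothetical placeholders, not the values actually used in the project.

    from autobahn.twisted.wamp import ApplicationSession, ApplicationRunner
    from naoqi import ALProxy

    motionProxy = ALProxy("ALMotion", "ip.address.of.your.nao.robot", 9559)

    class AvatarControllerSession(ApplicationSession):
        def onJoin(self, details):
            def on_skeleton(joint_names, joint_angles):
                # Translate the received joint angles into Nao motion commands
                motionProxy.setAngles(joint_names, joint_angles, 0.3)
            # Subscribe to the (hypothetical) topic the laptop publishes skeleton data on
            return self.subscribe(on_skeleton, u"avatar.skeleton")

    if __name__ == "__main__":
        runner = ApplicationRunner(u"ws://localhost:8080/ws", u"realm1")
        runner.run(AvatarControllerSession)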

A stereo RGB camera connected to a second single-board computer (like an Intel Edison or Intel Joule) is attached to the robot’s head using custom 3D-printed glasses inspired by Futurama’s “Bender”. The camera’s video stream is captured using the open source software GStreamer. A UDP stream is used to transfer the video back to the laptop, where it is streamed into the Oculus Rift using a DirectX wrapper named SharpDX.
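
The sending side can be sketched with GStreamer’s Python bindings. The concrete pipeline below (camera device, JPEG encoding, receiver address and port) is only an assumption for illustration and not necessarily the pipeline used in the showcase.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # Capture one of the cameras, encode to JPEG and send it as an RTP/UDP stream
    # (a full stereo setup would need a second pipeline or a combined source)
    pipeline = Gst.parse_launch(
        "v4l2src device=/dev/video0 ! videoconvert ! jpegenc quality=60 "
        "! rtpjpegpay ! udpsink host=192.168.0.10 port=5000")
    pipeline.set_state(Gst.State.PLAYING)

    GLib.MainLoop().run()  # keep streaming until the process is stopped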

Implementation details

Skeleton Tracking

Kinect

Gathering full body skeleton data from the Microsoft Kinect camera is very easy and needs only a few lines of code.

    public void Init(KinectSensor kinectSensor) {
        BodyFrameReader bodyReader = kinectSensor.BodyFrameSource.OpenReader();
        bodies = new Body[kinectSensor.BodyFrameSource.BodyCount];

        bodyReader.FrameArrived += BodyFrameArrived;
    }

    private void BodyFrameArrived(object sender,
                                  BodyFrameArrivedEventArgs bodyFrameEvent) {
        BodyFrameReference frameReference = bodyFrameEvent.FrameReference;
        BodyFrame frame = frameReference.AcquireFrame();

        using (frame) {
            frame.GetAndRefreshBodyData(bodies);

            foreach (var body in bodies) {
                if (body.IsTracked) {
                    // example how to get the Head joint position
                    Joint head = body.Joints[JointType.Head];

                    float x = head.Position.X;
                    float y = head.Position.Y;
                    float z = head.Position.Z;

                    // do some awesome stuff
                }
            }
        }
    }

At first you have to initialize your BodyFrameReader. As the Kinect sensor can track up to six bodies, you should always make sure to send the skeleton data of only one body to the Nao robot. Otherwise, the Nao will receive movement data of all recognized bodies, which would look like a robot going insane!

When the Kinect receives a BodyFrame, you are able to acquire the current frame to access the skeleton data of the different recognized bodies. Using the frame you can call the method GetAndRefreshBodyData(), which refreshes the skeleton data for all recognized bodies. Now you can access the so-called Joints and their 3D coordinates.

Intel RealSense

When using the Intel RealSense SDK, the implementation is just as easy as with the Kinect 2.

    // ptd is a PXCMPersonTrackingData instance
    Int32 npersons = ptd.QueryNumberOfPeople();

    for (Int32 i = 0; i < npersons; i++) {
        // Retrieve the PersonTracking instance
        PXCMPersonTrackingData.Person ptp = ptd.QueryPersonData(
                   PXCMPersonTrackingData.AccessOrder.ACCESS_ORDER_BY_ID, i);
        PXCMPersonTrackingData.PersonJoint ptj = ptp.QuerySkeletonJoints();

        // work on the tracking data
        int njoints = ptj.QueryNumJoints();
        PXCMPersonTrackingData.JointPoint[] joints = new PXCMPersonTrackingData.JointPoint[njoints];
        ptj.QueryJoints(joints);

        // do some awesome stuff
    }

Nao movements

Setup

The Nao robot can easily be programmed using JavaScript, C++, Python and Java. In our solution we used the Python bindings. First we need to set up the Nao robot. To be able to control the Nao with Python, we need to import the naoqi library, which is available on the Nao website.

    import sys
    import almath
    import time
    from naoqi import ALProxy

    PORT = 9559
    IP = "ip.address.of.your.nao.robot"

    # Load modules for different tasks
    speechProxy = ALProxy("ALTextToSpeech", IP, PORT)   # Allows NAO to speak.
    audioProxy = ALProxy("ALAudioPlayer", IP, PORT)     # Allows NAO to play audio.
    postureProxy = ALProxy("ALRobotPosture", IP, PORT)  # Allows use of predefined postures.
    motionProxy = ALProxy("ALMotion", IP, PORT)         # Provides methods to make NAO move.

To make use of specific Nao features you need to load different modules. The following code samples show how to make use of these different modules.

Nao learns to speak

To let Nao speak you just have to load the ALTextToSpeech module. Using the method say(), the robot will speak the given text out loud. There are different parameters available which can be used to modify the pitch or the speed of the spoken text. Download packages for many languages are available in the Nao marketplace.

    speechProxy.say("Hello Intel Developer Zone!")
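
Voice pitch and speaking rate can be tweaked as well. The sketch below uses setLanguage() and setParameter(); treat the exact parameter names (“pitchShift”, “speed”) and values as illustrative assumptions based on the ALTextToSpeech documentation rather than the project’s actual settings.

    speechProxy.setLanguage("English")           # requires the English language package
    speechProxy.setParameter("pitchShift", 1.1)  # raise the voice pitch slightly
    speechProxy.setParameter("speed", 90)        # speak a bit slower than the default of 100
    speechProxy.say("Hello Intel Developer Zone!")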

Nao is a poser

Nao is capable of a lot of different postures. Postures are predefined routines inside the Nao kernel and result in a chain of Nao movements to reach a posture state. Nao can go to a posture from every possible current position. Switching between postures like “LyingBelly” and “StandInit” is not a problem either.

Nao Postures

With the following code snippet, Nao will change his posture to “StandInit”, followed by a “LyingBelly” posture. You can easily switch between all postures shown in the picture above.

    postureProxy.goToPosture("StandInit", 1)
    time.sleep(1.0)
    postureProxy.goToPosture("LyingBelly", 1)

Nao is moving

As only using predefined postures is pretty boring, we now want to move the robot’s joints without using predefined macros. Nao has 25 degrees of freedom based on the joints of the human skeleton.

Nao Evolution 5 has 25 Degrees of Freedom

Let’s say we want to move the left and the right arm to a fixed position as shown in the picture above. For this we need to make use of the ALMotion module. Using the motionProxy we can move the different Nao joints to their defined angles, i.e. positions.

    leftJoints = ["LShoulderRoll", "LShoulderPitch"]
    rightJoints = ["RShoulderRoll", "RShoulderPitch"]
    leftAngles = [2.0, 0.6]
    rightAngles = [-2.0, 0.6]

    motionProxy.setAngles(leftJoints, leftAngles, 0.5)
    motionProxy.setAngles(rightJoints, rightAngles, 0.5)

The method setAngles() expects an array of joints to move, angle data for those joints, and the speed of the movement, which should be between 0 (slow) and 1 (fast).
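
The walking gesture metaphor mentioned earlier can be wired up in a similar way. The following minimal sketch assumes that the gesture detection has already decided whether the operator is leaning forward; the velocity value is only illustrative.

    def on_walk_gesture(lean_forward):
        motionProxy.moveInit()                     # ensure a stable posture before walking
        if lean_forward:
            motionProxy.moveToward(0.5, 0.0, 0.0)  # walk forward at half of the maximum velocity
        else:
            motionProxy.moveToward(0.0, 0.0, 0.0)  # stop walking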

Conclusion

Using only off-the-shelf hardware, the software consulting team of TNG Technology Consulting GmbH was able to realize a multimodal telepresence robotics system. Controlling the Nao with gestures was easy to implement within just one day. One of the major challenges was processing the massive amount of video data and transferring it over a wireless connection. Using an Intel IoT Gateway and a special RouterBoard we were finally able to transfer the video signal and all other controller data with a delay small enough to provide a good out-of-body experience.

Authors

If you are interested in this showcase or in a conference talk about this topic, don’t hesitate to contact us!

 

“Ma sempul, oel ngati kameie.”

“Father, I see you.”


Parrots On Mobile Tech Con and Nao Congress

September 9, 2016

We are currently preparing for one of our last talks about the AugmentedRift Terminator Vision at the Mobile Tech Conference in Berlin on the 14th of September. Don’t miss us and see the AugmentedRift in action. Visit our presentation from 12:15 to 14:15 in room “Music Hall 4”.

From now on we are focusing our attention on our new talk “Avatar” about telepresence robotics with Oculus Rift and Kinect (and RealSense coming soon). We will have our first international talk about the topic at the Nao Congress 2016 in Lowell (NERVE center of the University of Massachusetts) near Boston on the 13th of October. We will also showcase our work there.

nao.jpg


Telepresence robotics at IoT Agenda

August 2, 2016

Check out our newest article about our Avatar project – combining SoftBank Robotics’ Nao, Microsoft Kinect and Oculus Rift to build a telepresence robotics system. It is now available on TechTarget‘s IoT Agenda blog.

Telepresence robotics with Nao and Kinect

We will soon add another big blog post about this project here. Until then, have fun reading this post.


Avatar project – photos from the set

June 25, 2016

We are currently creating a new video for our newest project with TNG Technology Consulting, a telepresence robotics system based on SoftBank’s Nao robot, the Kinect One and the Oculus Rift. Here are a few photos from the set. The video itself will follow shortly.



OOP 2016 Keynote now available on YouTube

May 30, 2016

It is the very first time that we have published a video held entirely in German, and it was also a premiere for us to give a keynote at the OOP conference. The OOP celebrated its 25th anniversary, and we held our keynote “See like a Terminator – Augmented Reality with Oculus Rift”. Enjoy!

In the meantime we are working hard on our new talk “Avatar – Telepresence robotics with Nao and Kinect”. The premiere is at the TNG Technology Consulting GmbH Big TechDay on the 3rd of June 2016 in Munich! We hope to present this talk in different languages at conferences all over the world again, as was the case with the “See like a Terminator” talk!