Now also on GitHub

Everyone out there interested in building on the code developed in this project now has a full repository of all the code, as well as a guide to the complementary code needed to build up the whole program, Bodyghost VR, yourself in Unity. This is a great way to learn how we solved problems and calibrated the hardware. But it is also an opportunity to develop, evolve and widen the whole concept of point cloud recording for VR. Do not forget: all code is published under a CC BY 4.0 license, meaning that you can pick and use, change and sell everything we have made here, but you always have to credit us as the original creators. And when you build it, please include this splash image.

Just follow the link, and the code and instructions are there to examine and use: https://github.com/gustavthane/Bodyghost-VR-for-Unity

Bodyghost VR for free

This project has been financed by Konstnärsnämnden, the Swedish Arts Grants Committee, through Kulturbryggan. The ambition was to find a new format to exhibit bodily movements such as craft, dance or martial arts. This particular grant was directed towards people working with combinations of different cultural expressions. So we got our financing, and we are grateful that the whole project has been supported by Kulturbryggan this far. Now we are out of money… so now we publish what we have. The grant was financed by public taxes, so this is what we give back to the public. Our hope is that people will download, use and keep developing this format for exhibitions.


The first piece of software published is Bodyghost VR. This is the software we have used to play point cloud video recorded with the Intel RealSense D415. It is used for a virtual reality experience currently exhibited at Husqvarna museum. You find the files at: http://www.konstsmedjan.se/Bodyghost/
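
If you want to experiment with the raw recordings outside our player, librealsense itself can replay a recorded .bag file and turn each depth frame into a point cloud. A minimal C++ sketch of that idea (the file name is a placeholder, and our own player's file handling may differ):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // Replay a recording instead of opening a live camera.
    rs2::config cfg;
    cfg.enable_device_from_file("recording.bag"); // placeholder file name

    rs2::pipeline pipe;
    pipe.start(cfg);

    rs2::pointcloud pc;
    while (true) {
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::depth_frame depth = fs.get_depth_frame();
        rs2::video_frame color = fs.get_color_frame();

        pc.map_to(color);                      // texture the cloud with RGB
        rs2::points points = pc.calculate(depth);
        // points.get_vertices() now holds one xyz vertex per depth pixel,
        // ready to be uploaded to a renderer such as Unity.
    }
}
```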

Exhibition in Husqvarna museum

Tacit knowhow is now starting its launch. The first step is an exhibition at Husqvarna museum, where the Bodyghost software developed within the project is shown on an HTC Vive headset. The museum is open weekdays 10:00-15:00 and weekends 12:00-16:00. As a VR experience, you are invited to visit the old forge in Granshult where Gustav Thane is forging, the forge Bräcke smedja where Bertil Pärmsten is forging, the forge Therese smedja where Therese Engdahl is forging, and a forge in Väring where Julius Pettersson is forging. The volumetric video contains a full-scale point cloud of the blacksmiths tapering a piece of heated metal on their anvils. The four clips are about 10 minutes in all. Go there and check it out if you have the chance. If not, you can set up an HTC Vive system wherever you like and wait for us to share the recorded files and the software to play them… in February 2020.

Photogrammetry in an old forge

Before the launch of the Husqvarna museum exhibition we finally made one further improvement: we built a proper 3D environment, using Autodesk ReCap Photo and a Structure Sensor Mark II by Occipital. The journey went to Granshults faktorismedja just north of Jönköping, where we made a final recording with our own blacksmith master Gustav Thane. The small forge, just under 5×5 meters, was scanned as a complement to the volumetric recordings with the multiple Intel RealSense cameras. The generated point clouds were placed by the anvil in the scanned 3D environment. This really heightened the immersiveness of the experience.

Two things you need to know if you want to try this at home. (1) The Structure Sensor uses an iPhone or iPad to operate, and if there is no internet connection it will not work. It pretends to work, making all the sounds and triangulation visualizations, but it doesn't. So even if you are stranded in a 17th-century forge in the middle of nowhere, you need to set up a wi-fi connection or equivalent to make it work. (2) Autodesk ReCap Photo makes really nice photogrammetry of objects, and it can be used with a drone, but it does not give you a room. So we had to pretend to capture the anvil, but we did so from a distance of almost 2 meters, getting a whole lot of the floor and room into the same pictures. This made ReCap automatically reconstruct almost the whole forge in relatively high resolution. (Photogrammetry is a technique where you take a lot of photos, 100 or so, and software constructs a 3D model from them.)

Eventually we did paste in a few details from the Structure Core, but those cannot compare in resolution and quality with the photogrammetry from ReCap and a high-resolution professional-grade camera. The problem was that the ReCap model turned out to be over 1 GB. It looked amazing, but to avoid lag we scaled it down to 20% of full quality, making it almost as poor in quality as the Structure Core files. Another problem was that the ReCap capture was focused on the anvil: the workbenches and tools lying further away in the forge became sort of morphed. So we used a 3D modeling software to simply paste workbenches and a piece of the roof into the model, scaling them to life size and making sure the floor lay in the right place.

It looks a lot better, and it is so much fun to be able to walk around in the forge, on a floor, looking at tools and stuff.

First public appearance

As it happened, a possibility to exhibit videos recorded with this new technology appeared. The Swedish craft and design museum, Röhsska Museum, had an event where alternative and practice-based learning and teaching was in focus. It was a perfect forum for testing our new technique on a real-life audience. Now everything had to happen in one week: intense coding to finish the software, journeys around the country to record some of the most skilled blacksmiths, and more coding, since a few small things turned out to be less than optimized when running a live project.

Still, both hardware and software proved surprisingly stable. The cameras shut down sometimes, but that was easily fixed by pulling out and then reinserting their USB 3 cables. Recording mode otherwise ran flawlessly (on the second-highest resolution); the only challenge was that the cameras needed to stand 0.7-1 meter away from the person being recorded to get enough sharp pixels. Playback mode was a little less perfect. There were still almost no bugs or unwanted side features, and the ones discovered were easily dealt with by restarting the software, but the audience at Röhsska reacted to the pixelated result. The cameras record beautiful point clouds from where they stand, but also less perfect points in their periphery. When we combine four great sides into one body, the four great sides bring eight peripheries with them, resulting in bad-quality points hanging in the air just outside the high-quality point cloud.

Otherwise, this new way of recording and documenting activity is everything we could hope for. It is absolutely awesome to be able to take something moving in our physical environment and put it into a digital one without animation. We just turned the perspective of what "virtual" reality is back towards reality.
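
One conceivable way to tame those peripheral points (a sketch, not what we actually shipped) is to simply discard vertices that originate near the edges of each camera's depth image, where the stereo matching is weakest:

```cpp
#include <librealsense2/rs.hpp>
#include <vector>

// Sketch: drop point-cloud vertices that originate near the depth-image
// border, where D415 stereo depth is least reliable. The 'margin' value
// is a guess to be tuned per setup, not a number from the project.
std::vector<rs2::vertex> trim_periphery(const rs2::points& points,
                                        int width, int height,
                                        int margin = 60)
{
    const rs2::vertex* v = points.get_vertices();
    std::vector<rs2::vertex> kept;
    kept.reserve(points.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (x < margin || x >= width - margin ||
                y < margin || y >= height - margin)
                continue;                  // skip the noisy border region
            const rs2::vertex& p = v[y * width + x];
            if (p.z > 0.f)                 // z == 0 means no depth data
                kept.push_back(p);
        }
    }
    return kept;
}
```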

The awesome blacksmiths recorded are Bertil Pärmsten at Bräcke smedja, Therese Engdahl at Therese smedja and Julius Pettersson at Manufaktursmide.

Multiple Intel RealSense cameras at once

A deep dive into the RealSense cameras from Intel has proven fruitful. Gunnar and Mikael have written new software in C++ in parallel with extensive construction work in Unity. Today the whole team engaged in a first recording of a proper VR video clip, recorded with four Intel RealSense cameras plus sound. Gustav Thane took up the hammer again and did his thing. It became clear that a more user-friendly interface for handling the files is needed.
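
Our C++ capture code is not quoted here, but the basic librealsense pattern for driving several cameras at once is to enumerate the connected devices and start one pipeline per serial number. A minimal sketch, using the stream settings described further down:

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    rs2::context ctx;
    std::vector<rs2::pipeline> pipelines;

    // Start one pipeline per connected RealSense camera,
    // keyed by each device's serial number.
    for (auto&& dev : ctx.query_devices()) {
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        rs2::config cfg;
        cfg.enable_device(serial);
        cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
        cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);
        rs2::pipeline pipe(ctx);
        pipe.start(cfg);
        pipelines.push_back(pipe);
        std::cout << "Started camera " << serial << "\n";
    }

    // Poll each camera in turn; real code would record or render here.
    while (true)
        for (auto& pipe : pipelines)
            rs2::frameset fs = pipe.wait_for_frames();
}
```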

Prior to this session, Gunnar spent some time on making a rather intuitive method to calibrate the four cameras to each other with the hand controllers in the VR environment. Mikael built a way to paste a 360 movie onto the walls and roof of the space where the textured point cloud is moving around. But most of the time, Mikael's skills in programming Unity and Gunnar's mastery of solving problems have been just the combo to push the project forward. A brand new computer had to be purchased too. The old one was kick-ass a year ago, but had difficulties handling the heavy files and the processor-draining recording and playback. The playback of four simultaneous point clouds is rather heavy on the processor, so we have been experimenting with using a decimation filter on the point cloud rendering, resulting in a better frame rate… but in the end a new GPU was our only choice. (The new computer has an RTX 2080, an Intel Core i7-9700K and a proper SSD drive.) Today the depth sensors are recording at almost full capacity, 848×480 pixels (full is 1280×720), while the texture is recorded at 1280×720. The reason we do not use full capacity for the depth sensor is that it would only give us 15 frames per second, and 30 frames per second seemed more attractive at the time… and still does. Soon we are ready to do a proper rehearsal with an actual audience.
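
The decimation filter mentioned above is one of librealsense's built-in post-processing blocks. A sketch of how it can be applied to each depth frame before the point cloud is generated (the magnitude value here is illustrative, not our actual setting):

```cpp
#include <librealsense2/rs.hpp>

// Sketch: reduce depth resolution with librealsense's decimation filter
// before building the point cloud, trading detail for frame rate.
rs2::points decimated_cloud(rs2::pipeline& pipe, rs2::pointcloud& pc)
{
    rs2::decimation_filter dec;
    dec.set_option(RS2_OPTION_FILTER_MAGNITUDE, 2.f); // 2x2 blocks -> 1 px

    rs2::frameset fs = pipe.wait_for_frames();
    rs2::depth_frame depth = fs.get_depth_frame();
    rs2::depth_frame filtered = dec.process(depth);   // quarter the points
    return pc.calculate(filtered);
}
```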

Hot as hell

Gustav in VR-gear

Some warm winds blew through today's investigation when we finally got to use some hot steel! Wow! Yes, you assumed correctly: we will try to measure the temperature of the steel while forging. In honor of the day, our blacksmith brought a forge with him, since we were going to meet a measurement guru from Termisk Systemteknik. Mr. Guru (who usually goes by the name of Claes Nelsson) brought some nice heat cameras with him to show us their almost magical properties. These heat cameras sense IR radiation to measure temperature. However, they need a few reference points to calibrate against the temperature of the steel to give a correct measurement and visualization, so good old SP lent us their amazing self-built thermocouple data logger together with a few thermocouples. Thank you!

Let's start! The first critical point of measuring the temperature was to attach the thermocouple to the steel. How would we do this? Our blacksmith came to the rescue and prepared his steel by cutting a score in it, which we could then use to attach the thermocouple. We needed to do this to measure the emissivity of the steel.

Emissivity of metals is a difficult area, which is still being actively researched. All materials emit electromagnetic radiation when heated, and this radiation varies with the temperature of the material. Furthermore, there is something called a black body: a physical body that is perfectly black, and thus absorbs all incoming radiation without reflecting anything. When such a body is heated it will, just like any other physical body, emit radiation in the form of electromagnetic waves, and this is called "black body radiation". But in reality there is no such thing as a perfectly black body. This is where emissivity comes in handy: emissivity is simply a material property that describes how close a certain material is to having the properties of a black body, expressed as a fractional number between 0 and 1. And thanks to the physicist Max Planck there is an equation called Planck's radiation law, which describes the frequency distribution of electromagnetic waves emitted from a black body in thermal equilibrium at a certain temperature. So why is this important to us? Well, because if we know the emissivity of a material AND the distribution of electromagnetic waves coming from that same material, we can find the temperature using an IR camera. And this time the emissivity was determined by actually measuring the temperature with the thermocouples, which was then fed into the software of the IR camera to find the emissivity that made the readings on the screen match our measurements.
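
For reference, this is the relation the IR camera's software is effectively solving. Planck's law gives the spectral radiance of an ideal black body at temperature T, and the emissivity scales it down for a real material:

```latex
% Planck's radiation law: spectral radiance of a black body at temperature T
B(\nu, T) = \frac{2 h \nu^{3}}{c^{2}} \cdot \frac{1}{e^{h \nu / (k_B T)} - 1}

% A real material emits only a fraction of this, set by its
% emissivity 0 < \varepsilon \le 1 (a black body has \varepsilon = 1):
L(\nu, T) = \varepsilon \cdot B(\nu, T)
```

Measuring the temperature with the thermocouples and the radiance with the camera pins down ε; afterwards the camera can invert the relation and read the temperature on its own.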

So why did we go through all this trouble to measure the temperature of the steel? It is important for us to measure the temperature and find a way to visualize it, as temperature is an essential parameter in the practice of forging. Unfortunately, we were not convinced that this method was the right way for us, as the results were too uncertain. A few problems that struck us when we used these cameras were that they did not give a stable value, and that they were also a bit too expensive.

Today a lot of other things happened in addition to testing the heat cameras. An attempt was made to program an algorithm to measure brightness using a regular camera (and thus somehow extract the temperature from that reading), and we continued the work with the depth cameras. We took a few steps forward and the journey progresses.
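
That brightness idea could look roughly like the sketch below: average the red channel over a region of interest in an RGB frame, then map the average to a temperature through a calibration fitted against thermocouple readings. Everything here, the red-channel choice and the calibration numbers alike, is hypothetical, not our actual algorithm.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: estimate steel temperature from the average
// red-channel brightness inside a region of interest of an RGB8 frame.
// The calibration points below are made up; a real mapping would be
// fitted against thermocouple readings as described in the post.
double estimate_temperature(const std::uint8_t* rgb, int width,
                            int x0, int y0, int x1, int y1)
{
    double sum = 0.0;
    std::size_t n = 0;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            sum += rgb[(static_cast<std::size_t>(y) * width + x) * 3]; // red
            ++n;
        }
    double brightness = sum / n; // average red value, 0..255

    // Made-up two-point calibration: dull red glow vs. bright yellow heat.
    const double b_low = 80.0,   t_low = 700.0;   // ~700 °C
    const double b_high = 240.0, t_high = 1100.0; // ~1100 °C
    return t_low + (brightness - b_low) * (t_high - t_low) / (b_high - b_low);
}
```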

And from all of us to all of you, a very merry Christmas!

Argus-eyed

What has happened here lately?

We have continued our investigation of how to use depth cameras. This time we took the challenge to another level: our geniuses had a go at connecting all five depth cameras. Five… What on earth were we thinking? Good question, but those who never test weird stuff never get to be part of a fairy tale! Of course, our evolving talents want to overcome the latest obstacles and challenges in the project's technological development and walk the narrow path of this enchanted story, but this time it unfortunately did not go all the way. The five cameras did not record the picture, and one of the cameras delivered only a still image. We suspect this is because too much data was being sent, exceeding the capacity of the USB 2.1 connection.
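
A RealSense camera that silently falls back to USB 2 speeds is a classic cause of stalls and frozen frames. librealsense can report which bus type each device actually negotiated; a small diagnostic sketch (not from our code):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

// Sketch: list every connected RealSense device and the USB type it
// negotiated. Cameras stuck on a 2.x connection get a fraction of the
// bandwidth and can stall or freeze when several stream at once.
int main()
{
    rs2::context ctx;
    for (auto&& dev : ctx.query_devices()) {
        std::cout << dev.get_info(RS2_CAMERA_INFO_NAME) << " (S/N "
                  << dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER) << "): USB "
                  << dev.get_info(RS2_CAMERA_INFO_USB_TYPE_DESCRIPTOR)
                  << "\n";
    }
}
```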

Worth noting is also that we tinkered and fixed things for about half a day, and since then we have had another half-day where we met our sister studio, RISE Interactive Umeå, to discuss the project. RISE Interactive Umeå has done, and is doing, a lot of projects that are of interest to us in Tacit knowhow, and we hope that we can join forces within this project in the future.

In our video blog you can see that we are working on creating a 3D-printed jig for the camera and the tracker. The jig is intended to facilitate the tracking of the camera's position. At present, no final version has been produced, which means we continue to iterate on the prototype. This was not a story of happily ever after, and our story simply continues…

To look into the depth

The photo and experiment studio

After the last episode of technical troubles, our amazing team was set against a gigantic obstacle, where the HTC Vive put up significant resistance to the minds of Niels, Gunnar and Mike; but through guidance and discussions with our blacksmith, they were able to find a new path that could lead them towards the goal.

This time we were set on using depth cameras, which let us measure distances in an image using depth technology and active infrared (IR). Our depth camera uses depth sensors, an RGB sensor and an infrared projector. Our idea was that several perspectives and angles would allow the cameras to follow the movement and create a more accurate and detailed picture than we were able to achieve in our previous experiments. Something that should be noted is that the cameras' updated application programming interface (API) made it easier for us to pair two cameras, so it was a great occasion to use two cameras this time. In this experiment, the cameras were used both in the physical room and in the 3D room, which meant that our tech wizards had to work on matching the positions in both the 3D room and the physical room. They successfully connected the cameras, the HTC Vive and the trackers to a computer and to Unity. The HTC Vive and the trackers were a good aid for reproducing position data, which made it easier to move any of the cameras. The tracking system gave us a rough, basic calibration, but a fine calibration was still required to produce a true mirrored image in the 3D space. Part of the camera calibration was defining a maximum area that each camera could use. The key word in this test was calibration, calibration, CALIBRATION!
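
The rough calibration a tracker provides boils down to a chain of rigid transforms: a point seen by the camera is carried into world space through the tracker's pose and the fixed camera-to-tracker offset given by the mounting jig. A sketch of that chain (the types and names are placeholders, not our actual code):

```cpp
#include <array>

// Minimal 4x4 homogeneous transform; a real project would use a maths
// library (Unity's Matrix4x4, GLM, Eigen, ...) instead of this.
using Mat4 = std::array<std::array<float, 4>, 4>;
struct Vec3 { float x, y, z; };

// Apply a rigid transform to a point (implicit w = 1).
Vec3 transform(const Mat4& m, const Vec3& p)
{
    return {
        m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
        m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
        m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3],
    };
}

// Rough calibration via a tracker:
//   world = tracker_pose * camera_offset * camera_point
// 'tracker_pose' comes from the Vive tracking system every frame;
// 'camera_offset' is the fixed camera-to-tracker transform set by the jig,
// refined afterwards by the manual fine calibration described below.
Vec3 camera_to_world(const Mat4& tracker_pose, const Mat4& camera_offset,
                     const Vec3& camera_point)
{
    return transform(tracker_pose, transform(camera_offset, camera_point));
}
```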

Gunnar and Mike worked on calibrating the cameras manually, both with respect to position and rotation. However, it is very difficult to get the calibration really good when doing it manually; one needs to know the exact positions in the physical space as well as each camera's rotation. Something else that became problematic was the recording: the recordings on the two cameras did not start simultaneously, which caused a time shift between the recorded images. Another problem is that the recorded files are very large, which makes any form of editing demanding. Hence, neither the recording nor the playback was optimal at this time.

Something that was spectacular, however, was the visual experience of meeting one's own external form. Like Narcissus caught in the 3D mirror image, the viewer was filled with curiosity about the pictured self. Yes, I promise, it was a splendid view, and it was hard to tear oneself away from this beautiful creation.