As it happened, an opportunity to exhibit videos recorded with this new technology
appeared. The Swedish craft and design museum, Röhsska Museum, had an event
where alternative and practice-based learning and teaching were in focus. It was
a perfect forum for testing our new technique on a real-life audience. Now
everything had to happen in one week: intense coding to finish the software,
journeys around the country to record some of the most skilled blacksmiths, and
more coding, since a few small things turned out to be less than optimized once
the project ran for real. Still, both hardware and software proved surprisingly
stable. The cameras shut down sometimes, but that was easily fixed by pulling
out and reinserting their USB 3 cable. Recording mode otherwise ran
flawlessly (on the second highest resolution); the only challenge was that the
cameras needed to stand 0.7 to 1 meter away from the person being recorded to get
enough sharp pixels. Playback mode was a little less perfect. There were still
almost no bugs or unwanted side features, and the ones we discovered were easily
dealt with by turning the software off and on again, but the audience at Röhsska
reacted to the pixelated result. The cameras record beautiful point clouds
straight ahead of where they stand, but also less perfect points in their
periphery. When we combine four great sides into one body, the four great sides
bring eight peripheries with them, resulting in bad-quality pixels hanging in
the air just outside the high-quality point cloud.
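To make the geometry of the problem concrete: each depth frame is most trustworthy near its center, so one plausible mitigation (not something described above, just a sketch assuming the librealsense2 SDK) is to discard a margin of edge pixels before turning a frame into points. The margin width here is an invented tuning parameter.

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include <array>
#include <vector>

// Deproject a depth frame into 3D points while skipping a margin of
// edge pixels, where the sensor is least accurate.
std::vector<std::array<float, 3>> crop_and_deproject(const rs2::depth_frame& depth,
                                                     int margin = 60) { // placeholder width
    auto profile = depth.get_profile().as<rs2::video_stream_profile>();
    rs2_intrinsics intr = profile.get_intrinsics();

    std::vector<std::array<float, 3>> cloud;
    for (int y = margin; y < depth.get_height() - margin; ++y) {
        for (int x = margin; x < depth.get_width() - margin; ++x) {
            float dist = depth.get_distance(x, y);
            if (dist <= 0.f) continue; // no depth reading at this pixel

            float pixel[2] = { float(x), float(y) };
            std::array<float, 3> p;
            rs2_deproject_pixel_to_point(p.data(), &intr, pixel, dist);
            cloud.push_back(p);
        }
    }
    return cloud;
}
```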
Otherwise, this new way of recording and documenting activity is everything we
could hope for. It is absolutely awesome to be able to take something moving in
our physical environment and put it into a digital one without animation. We
have just turned the perspective of what “virtual” reality is toward reality.
The awesome blacksmiths recorded are Bertil Pärmsten at Bräcke smedja, Therese
Engdahl at Therese smedja and Julius Pettersson at Manufaktursmide.
Multiple Intel RealSense cameras at once
A deep dive into the RealSense cameras from Intel has proven fruitful. Gunnar and Mikael have written new software in C++, in parallel with extensive construction work in Unity. Today the whole team engaged in a first recording of a proper VR video clip, captured with four Intel RealSense cameras plus sound. Gustav Thane took up the hammer again and did his thing. It became clear that a more user-friendly interface for handling the files is needed.
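For anyone curious what driving several RealSense cameras at once looks like in code, here is a minimal librealsense2 sketch: one pipeline per connected device, each selected by serial number. It illustrates the general pattern rather than our actual recorder; the stream settings match the ones discussed below.

```cpp
#include <librealsense2/rs.hpp>
#include <vector>

int main() {
    rs2::context ctx;
    std::vector<rs2::pipeline> pipelines;

    // Start one pipeline per connected RealSense device,
    // selecting each camera by its serial number.
    for (auto&& dev : ctx.query_devices()) {
        rs2::config cfg;
        cfg.enable_device(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
        // Depth just below full resolution to keep 30 fps (see below);
        // color at 1280x720 for the texture.
        cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
        cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 30);

        rs2::pipeline pipe(ctx);
        pipe.start(cfg);
        pipelines.push_back(pipe);
    }

    // Poll each camera in turn; a real recorder would run these on
    // separate threads and write the frames to disk.
    while (true) {
        for (auto& pipe : pipelines) {
            rs2::frameset frames;
            if (pipe.poll_for_frames(&frames)) {
                rs2::depth_frame depth = frames.get_depth_frame();
                // ... hand the frame to recording / point cloud code ...
            }
        }
    }
    return 0;
}
```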
Prior to this session, Gunnar spent some time on a rather intuitive method to calibrate the four cameras to each other with the hand controllers in the VR environment. Mikael built a way to paste a 360° movie onto the walls and ceiling of the space where the textured point cloud is moving around. But most of the time, Mikael’s skills in programming Unity and Gunnar’s mastery of solving problems have been just the combo to push the project forward.

A brand new computer had to be purchased too. The old one was kick-ass a year ago but had difficulties handling the heavy files and the processor-draining recording and playback. The playback of four simultaneous point clouds is rather heavy on the processor, so we have been experimenting with a decimation filter on the point cloud rendering, resulting in a better frame rate… but in the end a new GPU was our only choice (the new computer has an RTX 2080, an Intel Core i7-9700K and a proper SSD).

Today the depth sensors are recording at almost full capacity, 848×480 pixels (full is 1280×720), whilst the texture is recorded at 1280×720. The reason we do not use full capacity for the depth sensor is that it would only give us 15 frames per second, and 30 frames per second seemed more attractive at the time… still does. Soon we are ready to do a proper rehearsal with an actual audience.
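As a rough illustration of the decimation idea (a sketch assuming librealsense2’s built-in post-processing, not our exact rendering path): the filter downsamples each depth frame before it is turned into points, so the point cloud gets proportionally cheaper to render. The magnitude value here is a placeholder.

```cpp
#include <librealsense2/rs.hpp>

// Turn one depth frame into a (decimated) point cloud.
// A magnitude of 2 halves the depth resolution along each axis,
// i.e. roughly a quarter as many points to render per frame.
rs2::points decimated_cloud(rs2::pipeline& pipe, rs2::pointcloud& pc,
                            rs2::decimation_filter& dec) {
    rs2::frameset frames = pipe.wait_for_frames();
    rs2::depth_frame depth = frames.get_depth_frame();
    rs2::frame filtered = dec.process(depth);   // downsample first
    return pc.calculate(filtered);              // then deproject to points
}

int main() {
    rs2::pipeline pipe;
    pipe.start();

    rs2::pointcloud pc;
    rs2::decimation_filter dec;
    dec.set_option(RS2_OPTION_FILTER_MAGNITUDE, 2); // placeholder value

    rs2::points points = decimated_cloud(pipe, pc, dec);
    // ... upload `points` to the renderer ...
    return 0;
}
```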
