Regard3D

I’m trying out Regard3D, an open-source photogrammetry tool. A couple of items, memo-to-self style:

    • its database does not have cellphone cameras in it. Had to google around to find the details on my particular phone
    • its database is this: https://github.com/openMVG/CameraSensorSizeDatabase 
    • just had to find where it was on my machine, and then make an entry for my phone. I’m still not sure whether I got the correct sensor ‘width’ dimension – running with it for now.
    • nb don’t do this with excel – excel does weird things to csv files, including hidden characters and so on, which will cause Regard3D not to recognize your new database entry. Use Sublime Text or another plain-text editor to make any changes (see the sketch after this list). You can double-click on an image in the imageset list inside Regard3D and add the relevant info one pic at a time, but this didn’t work for me.
    • I took the images with Scann3d, which made a great model out of them. But its pricing model doesn’t let me get the model out. So: found the folder on the phone with the images, uploaded them to google drive, then downloaded them to my machine. (Another nice thing about Scann3d: when you’re taking pictures, it has an on-screen red-dot/green-dot thingy that lets you know when you’re getting good overlap.)
    • Once I had the images on my machine, I needed to add exif metadata re focal length. Downloaded and installed exiftool. Command: exiftool -FocalLength="3.97" *.jpg – see the read-back check after this list.
    • In Regard3d, loaded the picture set in.
    • The next stages were a bit finicky (tutorial) – just clicking the obvious button would give an error, but if I had one of the image files selected in the dialogue box, all would work.
    • here’s a shot of the process in…erm… process…

  • Console would shout ‘error! error!’ from time to time, yet all continued to work…
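Re that database entry: here’s a minimal command-line sketch of appending one (the maker/model/width values are invented for illustration, and the field layout is an assumption – open sensor_database.csv, copy the delimiter style of an existing line exactly, and run this from wherever the file lives on your machine):

# append one entry: maker;model;sensor-width-in-mm (format assumed – check an existing line first!)
$ echo 'LG;Nexus 5;4.54' >> sensor_database.csv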
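And once the exiftool step from the list is done, you can read the tag back from a single image to confirm it stuck (the filename here is a stand-in for one of yours):

# write the focal length (in mm) into every jpg, then read one back to check
$ exiftool -FocalLength="3.97" *.jpg
$ exiftool -FocalLength IMG_0001.jpg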

I’m pretty sure I saw an ‘export to meshlab’ button go by at some point… but at any rate, at the end of the process I have a model in .ply and .obj!  (ah, found it: it’s one of the options when you’re ready to create the surface). All in all, a nice piece of software.

 


Working out the kinks in a VisualSFM via Docker workflow

Not these kinks.

VSFM, for those who’ve tried it, is a right huge pain in the arse to install. Ryan Bauman has done us all a huge favour by dockerizing it. His explanation of this is here – and once you’ve figured out some of the kinks, this is a much easier way of working with it.

Ah yes, the kinks.

First of all, before we go any further, why would you want to do this? Isn’t 123D Catch enough? It is certainly easier, I grant you that. And it does a pretty good job. But structure-from-motion applications each approach the job differently – Ryan does a comparison here on the same objects. Some of those applications are very expensive indeed. VSFM is free to use, and can be called from the command line, and with care and practice one can get very good results. (What really caught everyone’s eye on twitter the other day was Ryan’s workflow for generating 3d objects from found drone aerial footage. HOW COOL IS THAT.). So I set out to replicate it.

First things first: you need to go to Docker and install it (here is a post wherein I futz with Docker to run Rstudio).

Now, Ryan’s container (which we will use in a moment) also comes with the handy youtube-dl for grabbing youtube videos, and avconv for manipulating and cutting stills out of that video. What follows are my notes to myself (in which I sometimes copy-and-pasted from others’ posts, to remind me what I was trying to do) as I work through the first part of Ryan’s workflow – from downloading the video to generating the point cloud. The meshlab texturing stuff will be a follow-up post.

Initialize boot2docker from the command line, creating a new Boot2Docker VM.

$ boot2docker init

This creates a new virtual machine. You only need to run this command once.

Start the boot2docker VM.

$ boot2docker start

To set the environment variables in your shell, do the following:

$ eval "$(boot2docker shellinit)"

Then this:

$ docker run -i -t ryanfb/visualsfm /bin/bash

The first time, this will take a long while to download everything you need. This is Ryan’s container – the next time you go to do this, it’ll spin up very quickly indeed (one of the advantages of Docker: the container ships with just the bits you need!). Then:

$ youtube-dl 'https://www.youtube.com/watch?v=3v-wvbNiZGY'

downloads a file from youtube called:
The Red Church Dating to the late 5thearly 6th century-3v-wvbNiZGY.mp4

let’s rename that:

$ mv 'The Red Church  Dating to the late 5thearly 6th century-3v-wvbNiZGY.mp4' redchurch.mp4

now let’s create a new directory for it:

$ mkdir redchurch

and move the mp4 file into it:

$ mv redchurch.mp4 redchurch

Ok, so now we move into that folder and split the video into frames:

$ cd redchurch
$ avconv -i redchurch.mp4 -r 1/1 -qscale:v 1 redchurch_%08d.jpg

(note that in the original post by Ryan, he was using ffmpeg; the docker container uses avconv, a near-identical fork – here -r 1/1 pulls out one frame per second, and -qscale:v 1 keeps the jpgs at maximum quality)

And then we go up a level

$ cd ..

and run some vsfm on it:

$ VisualSFM sfm+pairs+pmvs ~/redchurch redchurch.nvm @8

This part took nearly three hours on my machine.

ASIDE: now, I had to increase the memory available to the boot2docker VM to make VSFM work; otherwise I was getting a segmentation fault at this step. To do this, I first found the .boot2docker folder by control-clicking in the Finder, then ‘Go to Folder’ and entering /Users/[user]/.boot2docker. I opened a new terminal there, and made a new file called ‘profile’ (no extension) with the following info:

#disk image size in MB
DiskSize = 20000

# VM memory size in MB
Memory = 7168

I made the file by typing vi profile at the terminal, then typed in the info; then Escape to stop editing and :wq to save the file and close it.
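(If vi isn’t your thing, a heredoc does the same job – a quick sketch, run from inside the .boot2docker folder:)

$ cat > profile <<'EOF'
# disk image size in MB
DiskSize = 20000

# VM memory size in MB
Memory = 7168
EOF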

Now, to get stuff out (and this post was most helpful):

We need to open another terminal window, set docker up there (run the eval line from above again), and ask Docker to give us the id of the container that is running, so we can cp (copy) files out of it:

$ docker ps

There will be a randomly generated ‘short name’ for your container; the short id will be the same as the one at the prompt in the terminal where vsfm is running, e.g. in my case: root@031e72dfb1de:~#

Then we need to get the full container id:

$ docker inspect -f   '{{.Id}}'  SHORT_CONTAINER_ID-or-CONTAINER_NAME

example (drawn from this post):

$ docker ps

CONTAINER ID      IMAGE    COMMAND       CREATED      STATUS       PORTS        NAMES

d8e703d7e303   solidleon/ssh:latest      /usr/sbin/sshd -D                      cranky_pare

$ docker inspect -f   '{{.Id}}' cranky_pare

You will get a ridiculously long string. Copy & paste it somewhere handy. On my machine, it’s:
031e72dfb1de9b4e61704596a7378dd35b0bd282beb9dd2fa55805472e511246

Then, in your other terminal (the one NOT running vsfm, but that has docker running in it), we do:

$ docker cp <containerid>:path-to-file useful-location-on-your-machine

In my case, the command looks like this:

shawngraham$ docker cp 031e72dfb1de9b4e61704596a7378dd35b0bd282beb9dd2fa55805472e511246:root/redchurch.1.ply ~/shawngraham/dockerific/

update: turns out you can use the short id, in this case, 031e72dfb1de:root etc and it’ll work just fine.

(dockerific being the folder I made for this occasion)
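Incidentally, you can splice the inspect right into the cp and skip copying that long string around – a sketch reusing the cranky_pare name from the example above (substitute your own container name and file path):

# resolve the full id inline and copy the file out in one go
$ docker cp "$(docker inspect -f '{{.Id}}' cranky_pare)":/root/redchurch.1.ply ~/dockerific/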

~oOo~

Tomorrow, I’ll write up the kinks in the meshlab part of this workflow. Thanks again to Ryan for a brilliant piece of work!

~oOo~

update july 29

ok, let’s make life a bit easier for ourselves, in terms of getting stuff into and out of the docker container. Let’s create a folder that we can use as a kind of in-out tray. I’ll create a folder on my file system, at /Users/shawngraham/dockerific

Then, when I am ready to run the container, we’ll mount that folder to the tmp folder in the container, like so:

$ docker run -i -t -v /Users/shawngraham/dockerific/:/tmp/ ryanfb/visualsfm /bin/bash

Now, anything we put in the tmp folder will turn up in dockerific, and vice versa. NB, things might get overwritten, so once something turns up in dockerific that I want to keep, I move it into another folder safe from the container. Anyway, when I want to get things out of the docker container, I can just cp (CoPy) the file to /tmp/:

$ cp output.ply /tmp/

…and then grab it from the finder to move it somewhere safe.
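So the full round trip looks something like this – the root@… prompt is inside the container, the other is my regular terminal (paths are mine; adjust to taste):

# inside the container: drop the finished model into the shared folder
root@031e72dfb1de:~# cp redchurch.1.ply /tmp/

# on the host: it’s now sitting in the in-out tray
shawngraham$ ls ~/dockerific/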

Historical maps into Unity3d

This should work.

Say there’s a historical map that you want to digitize.  It may or may not have contour lines on it, but there is some indication of the topography (hatching or shading or what not). Say you wanted to digitize it such that a person could explore its conception of geography from a first person perspective.

Here’s a workflow for making that happen.

Some time ago, the folks at the NYPL put together a tutorial explaining how to turn such a map into a minecraft world. So let’s do the first part of their tutorial. In essence, what we do is take the georectified map (which you could georectify using something like the Harvard Map Warper), load it into QGIS, add elevation points, generate a surface from those elevations, turn it into grayscale, export that image, convert it to raw format, and import it into Unity3d.

Easy peasy.

For the first part, we follow the NYPL:

Requirements

QGIS 2.2.0 ( http://qgis.org )

  • Activate Contour plugin
  • Activate GRASS plugin if not already activated

A map image to work from

  • We used a geo-rectified TIFF exported from this map but any high rez scan of a map with elevation data and features will suffice.

Process:

Layer > Add Raster Layer > [select rectified tiff]

  • Repeat for each tiff to be analyzed

Layer > New > New Shapefile Layer

  • Type: Point
  • New Attribute: add ‘elevation’ type whole number
  • remove id

Contour (plugin)

  • Vector Layer: choose points layer just created
  • Data field: elevation
  • Number: at least 20 (maybe.. number of distinct elevations + 2)
  • Layer name: default is fine

Export and import contours as vector layer:

  • right click save (e.g. port-washington-contours.shp)
  • May report error like “Only 19 of 20 features written.” Doesn’t seem to matter much

Layer > Add Vector Layer > [add .shp layer just exported]

Edit Current Grass Region (to reduce rendering time)

  • clip to minimal lat longs

Open Grass Tools

  • Modules List: Select “v.in.ogr.qgis”
  • Select recently added contours layer
  • Run, View output, and close

Open Grass Tools

  • Modules List: Select “v.to.rast.attr”
  • Name of input vector map: (layer just generated)
  • Attribute field: elevation
  • Run, View output, and close

Open Grass Tools

  • Modules List: Select “r.surf.contour”
  • Name of existing raster map containing colors: (layer just generated)
  • Run (will take a while), View output, and close

Hide points and contours (and anything else above the b/w elevation image), then Project > Save as Image.

You may want to create a cropped version of the result to remove un-analyzed/messy edges

As I noted a while ago, there are some “hidden, tacit bits [concerning] installing the Contour plugin, and working with GRASS tools (especially the bit about ‘editing the current grass region’, which always is fiddly, I find).”  Unhelpfully, I didn’t write down what these were.

Anyway, now that you have a grayscale image, open it in Gimp (or Photoshop; if you do have Photoshop, go watch this video and you’re done).

For those of us without Photoshop, this next bit comes from the addendum to a previous post of mine:

    1. Open the grayscale image in Gimp.
    2. Resize the image to a power of 2 + 1 (*shrug* everything indicates this is what you do, with Unity); in this case I chose 1025.
    3. Save as file type RAW. IMPORTANT: in the dialogue that opens, set ‘RGB save type’ to ‘planar’.
    4. Change the file extension from .data to .raw in mac Finder or Windows Explorer.
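(If you’d rather script steps 2–4, ImageMagick can do the resize and the raw export in one shot – a sketch of an alternative I haven’t battle-tested, with placeholder filenames; the ! forces the exact 1025×1025 size:)

# resize to 1025x1025, flatten to 8-bit grayscale, write raw pixel data
$ convert elevation-map.png -resize '1025x1025!' -colorspace Gray -depth 8 gray:heightmap.raw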

Now you can import this historical elevation map into Unity. In Unity, add a GameObject -> 3D Object -> Terrain to the project. In the inspector window, there’s a cogwheel. Click this; it opens the settings. One of the options will be ‘Import Raw’. Click this.

Select your .raw grayscale image.

  1. On the import dialogue, change it to 8-bit image rather than 16-bit.
  2. Change the width, height, x and z to all be 1025. Change the y to be 75 (yours will be different; look at the range between the highest and lowest points in your original map, and input that). For reference, please also see this post, which saved me: http://newton64.github.io/blog/2013-07-24-gimp-unity-terrain.html

Ta da – a white glacial landscape with your elevation data.

Now the fun stuff can happen. But – before someone can ‘walk’ around your landscape, you have to add controls to your project. So, in Unity 3d, go to:

Assets > Import Package > Characters.

Once that’s all done, you’ll drag-and-drop a ‘FPSController’ into your project. You’ll find it as below:

[screenshot: locating the FPSController in the project assets]

Click and grab that blue box and move it up into your project (just drop it in the main window). Make sure that the control is above (and also not intersecting any part of) your landscape, or when you go to play, you’ll either be stuck or indeed falling to the centre of the earth. We don’t want that. Also, delete the ‘Camera’ from the hierarchy; the FPSController has its own camera. My interface looks like this:

[screenshot: my Unity interface, with terrain and FPSController in place]

You do the grass and trees etc. from the terrain inspector, as in the window there on the right. I’ll play some more with that aspect, report back soonish. Notice the column drum in the right foreground, and the tombstone in the back? Those were made with 3d photogrammetry; both are hosted on Sketchfab, as it happens. Anyway, in Meshlab I converted from .obj to .dae, after having reduced the polygons with quadric edge collapse decimation, to make them a bit simpler. You can add such models to your landscape by dropping the folder into the ‘assets’ folder of your Unity project (via the mac Finder or Windows Explorer). Then, as you did with the FPSController block, you drag them into your scene and reposition them as you want.

Here’s my version, pushed to WebGL.

Enjoy!

(by the way, it occurs to me that you could use that workflow to visualize damned near anything that can be mapped, not just geography. Convert the output of a topic model into a grayscale elevation map; take a network and add elevation points to match betweenness metrics…)

p3d.in for hosting your 3d scans

I’m playing with p3d.in to host some three-dimensional models I’ve been making with 123D Catch. These are models that I have been using in conjunction with Junaio to create augmented reality pop-up books (and other things; more on that anon). Putting these 3d objects onto a webpage (or, heaven forbid, a pdf) has been strangely complicated and time-consuming. P3d.in serves a very useful purpose, then!

Below are two models that I made using 123D Catch. The first is the end of a log recovered from anaerobic conditions at the bottom of the Ottawa River (which is very, very deep in places). The Ottawa was used as a conduit for floating timber from its enormous watershed to markets in the US and the UK for nearly two hundred years. Millions of logs floated down annually… so there’s a lot of money sitting down there. A local company, Log’s End, has been recovering these old-growth logs and turning them into high-end wide plank flooring. They can’t use the ends of the logs as they are usually quite damaged, so my father picked some up and gave them to me, knowing my interest in all things stamped. This one carries an S within a V, which dates it to the time and timber limits of J.R. Booth, I believe.

logend-edit2 (Click to view in 3D)

And here we have one of the models that my students made last year from the Mesoamerican materials conserved at the Canadian Museum of Civilization (soon-to-be-repurposed as the Museum of Canadian History; what will happen to these awkward materials that no longer fit the new mandate?)

mesoamerican (Click to view in 3D)

PS
Incidentally, I’ve now embedded these in a Neatline exhibition I am building:

3d manipulable objects in time and space

Mesoamerica in Gatineau: Augmented Reality Museum Catalogue Pop-Up Book

Would you like to take a look at the term project of my first year seminar course in digital antiquity at Carleton University? Now’s your chance!

Last winter, Terence Clark and Matt Betts, curators at the Museum of Civilization in Gatineau Quebec, saw on this blog that we were experimenting with 123D Catch (then called ‘Photofly’) to make volumetric models of objects from digital photographs. Terence and Matt were also experimenting with the same software. They invited us to the museum to select objects from the collection. The students were enchanted with materials from mesoamerica, and our term project was born: what if we used augmented reality to create a pop-up museum catalogue? The students researched the artefacts, designed and produced a catalogue, photographed artefacts, used 123D Catch to turn them into 3d models, Meshlab to clean the models up, and Junaio to do the augmentation. (I helped a bit on the augmentation. But now that I know, roughly, what I’m doing, I think I can teach the next round of students how to do this step for themselves, too.) The hardest part was reducing the models to less than 750kb (per the Junaio specs) while retaining something of their visual complexity.

The results were stunning. We owe an enormous debt of gratitude to Drs. Clark and Betts, and the Museum of Civilization for this opportunity. Also, the folks at Junaio were always very quick to respond to cries for help, and we thank them for their patience!

Below, you’ll find the QR code to scan with Junaio, to load the augmentations into your phone. Then, scan the images to reveal the augmentation (you can just point your phone at the screen). Try to focus on a single image at a time.

Also, you may download the pdf of the book, and try it out. (Warning: large download).

Artefact images taken by Jenna & Tessa; courtesy of the Canadian Museum of Civilization

Virtual Worlds: and the most powerful graphics engine there is

Virtual worlds are not all about stunning immersive 3d graphics. No, to riff on the old Infocom advertisement, it’s your brain that matters most.  That’s right folks, the text adventure. Long time readers of this blog will know that I have experimented with this kind of immersive virtual world building for archaeological and historical purposes. But, with one thing and another, that all got put on a back shelf.

Today I discovered, via Jeremiah McCall’s Historical Simulations / Serious Games in the Classroom site, Interactive Fiction (text adventure) games about Viking Sagas – part of Christopher Fee’s English 401 course at Gettysburg College.

Yes, complete interactive fictions about various parts of the Viking world! (see the list below). I’m downloading these to my netbook to play on my next plane journey.

Now, interactive fiction can be quite complex, with interactions and artificial intelligence as compelling as anything generated in 3d – see the work of Emily Short. And while creating immersive 3d can be quite complex and costly in hardware/software, Inform 7 lets you generate interactive fiction quite easily (AND as a bonus teaches a lot about effective world building!)

Explore the Sites and Sagas of the Ancient and Medieval North Atlantic through one of the Settings of The Secret of Otter’s Ransom IF Adventure Game: The earliest version of the Otter’s Ransom game was designed to be extremely simple, and to illustrate the pedagogical aims of the project as well as the ease of composing with Inform 7 software: In this iteration the game contains no graphics or links, utilizes very little in the way of software functions, tricks, or “bells and whistles,” and contains a number of rooms in each of sixteen different game settings; as the project progresses, more rooms, objects and situations will be added by the students and instructor of English 401, as well as appropriate “bells and whistles” and relevant links to pertinent multimedia objects from the Medieval North Atlantic project.

Using simple, plain English commands such as “go east,” “take spear-head,” “look at sign” and “open door” to navigate, the player may move through each game setting; moreover, as a by-product of playing the game successfully, a player concurrently may learn a great deal about a number of specific historical sites, as well as about such overarching themes as the history of Viking raids on monasteries, the character of several of the main Norse gods, and the volatile mix of paganism and Christianity in Viking Britain. The earliest form of the game is open-ended in each of the sixteen settings, but eventually the complete “meta-game” of The Secret of Otter’s Ransom will end when the player gathers the necessary magical knowledge to break an ancient curse, which concurrently will require that player to piece together enough historical and cultural information to pass an exit quiz.

Play all-text versions of the site games from The Secret of Otter’s Ransom using the Frotz game-playing software.

Play versions of the site games which include relevant images using the Windows Glulxe game-playing software.

In order to view images the player must “take” them, as in “take inscription;” very large images may come up as “[MORE]” which indicates that text will scroll off the screen when the image is displayed. Simply hit the return key once or twice and the image will be displayed.

We hope that you will enjoy engaging in adventure-style exploration of Viking sites and objects from the Ancient and Medieval North Atlantic!

Start by saving one of the following modules onto your desktop; next click the above game-playing software. When you try to open the Frotz software (you may have to click “Run” twice) your computer will ask you to select which game you’d like to play; simply select the module on your desktop to begin your adventure; you may have to search for “All Files.” Each game setting includes a short paragraph describing tips, traps, and techniques of playing:

Andreas Ragnarok Cross

Balladoole Ship Burial

Braaid Farmstead

Broch of Gurness

Brough of Birsay Settlement

Brussels Cross

Chesters Roman Fort

Cronk ny Merriu Fortlet

Cunningsburgh Quarry

Helgafell Settlement

Hvamm Settlement

Hadrian’s Wall

Jarlshof Settlement

Knock y Doonee Ship Burial

Laugar Hot Spring

Lindisfarne Priory

Maes Howe Chambered Cairn

Maughold – Go for a Wild Ride

Maughold- Look for the Sign of the Boar’s Head

Maughold – The Secret of the Otter Stone

Mousa Broch

Ring of Brodgar

Rushen Abbey Christian Lady

Ruthwell Cross

Shetland Magical Adventure

Skara Brae

Stones of Stenness

Sullom Voe Portage

Tap O’Noth Hillfort

Temple of Mithras at Carrawburgh

Ting Wall Holm Assembly Place

Tynwald Assembly Place

Yell Boat Burial

New Talent Tuesdays: 3dHistory & Steve Donlin

I’m pleased to announce a new occasional series here on Electric Archaeology: “New Talent Tuesdays”. I have been getting queries from grad students, talented amateurs, avocational archaeologists and historians, about the possibility of contributing to this blog. At first, I was reluctant… but then I thought, why? And no good reason presented itself. So, if I can help someone else join the conversation then that certainly fits the mission of this blog, and academe more generally! If you are interested in contributing, send me a note with a brief background, links to your work, and your ideal topic.

Without further ado, I am pleased to introduce Steve Donlin and his work on 3d representation. Steve is a graduate of the University of Maryland with a Bachelor’s Degree in Ancient History. He currently works outside the field, but volunteers for numerous historical societies and blogs at 3dhistoryblog.com.

Bringing History to Life through 3d Visualization

I graduated in 2007 from the University of Maryland with a History Degree. Unimpressive GPA, but still over a 3.0. I was happy with my path in college, but I was a little afraid of the prospect of finding work. I had tons of student loan debt already, so I wasn’t thinking Grad School. I took the first job I could find: doing Audio Visual work at the Four Seasons Washington, DC. Setting up events and selling clients on all the wonders fancy AV could bring to their meetings – not exactly groundbreaking research on Rome or Egypt!

After a year of working for the company, I was introduced to a program that really has begun to change my life and my direction. I began creating 3d models and renderings with Google Sketchup. Sketchup is a free program distributed by Google that is a more user-friendly version of AutoCAD. Coupling that together with Kerkythea, a free open-source rendering program, I began creating 3d renderings of our events and hotel space for our clients. You can see some of my work here. I am still learning and hope to increase my talents. My career path had gone in a new direction.

One evening I was sitting around watching the History Channel and the old series Engineering an Empire came on. It hit me: I had found a way to meld both parts of my life. 3D Visualization is a perfect way to bring Ancient History to life. Not only are many famous monuments from history destroyed or badly damaged; even the ones we still have are not as impressive as they would have been in their day. A great example is that Trajan’s Column originally would have been in lush color. Check out a report here. What a way to bring this back to life!

How did I get involved in Historical 3d Visualization? Well, I started reading articles about creating your own business, your own blog, or just simply starting your own project. One great piece of advice was to start tweeting about what you are interested in. So I created 3dhistory on Twitter. I began to document all the work I was planning on trying, or just interesting posts that I found. I am now up to 52 followers – not impressive, but hey, they are real people!

About a month in, something amazing happened. I began tweeting about a company I had read an article about in Archaeology Magazine: CyArk. CyArk is a nonprofit, noncommercial project of the Kacyra Family Foundation located in Orinda, California. CyArk’s mission statement is that they are

“[…]dedicated to the preservation of cultural heritage sites through the CyArk 3D Heritage Archive, an internet archive which is the repository for heritage site data developed through laser scanning, digital modeling, and other state-of-the-art spatial technologies.”

Pretty cool stuff, I thought. I mentioned them in a few posts, and they sent me back a message saying they had a lot of very accurate laser scans of Pre-Columbian monuments that could be used to create 3d models.

This started a dialogue that ended up with me creating a 3d model of their laser scan of Monte Alban, the original capital of the Zapotec Empire. I was able to use their point cloud data and Sketchup to recreate the largest building at Monte Alban for their website. Check it out. I am currently working on sites at Chichen Itza for them.

I give all my thanks to social networking and the urge to actually put in extra time and put myself out there. I volunteered for a job I did not know I could finish. I took a different approach, and now I have a great contact in a burgeoning field which interests me greatly. I plan on continuing this work as much as I can. If you are not currently involved in social networking, you should be! I was able to get in contact easily with a company I read about in a magazine. I do not know how that would have been possible years ago.

I hope to continue to show these projects more and promote the use of 3d Visualization in history.  Soon I will be launching 3dhistoryblog.com where I will document my work and the tireless work of others.  There is amazing stuff out there that truly can bring history to life.