Carnegie Mellon Libraries are introducing some games to help students “develop research skills through
entertaining and easy-to-repeat activities. At this stage, we are testing each game to work through any technical glitches and prepare the games for a final version.
Please feel free to send us your comments & suggestions on ways that we can further develop the games.”
It’s been a virtual morning. Just participated in the VastPark stress test. VastPark is a nascent virtual worlds platform – according to their material,
VastPark is an end-to-end solution for creating, deploying and distributing virtual experiences on the web. It’s composed of a new breed of applications, designed to make creating these experiences simpler and faster, with a more immersive result. We call this the era of the virtual web.
Powered by open specifications
VastPark is powered by some exciting new specifications that have been developed to fulfill two of the layers in this virtual web: MetaWSS for content distribution and IMML for content presentation.
Read more about how VastPark is working to standardise the virtual web.
So I downloaded the alpha browser, and logged in.
The point of view was first person, with mouse and keyboard controls for moving around. There’s a chat window on the right… it felt a bit like one of those VRML-type sites. I couldn’t connect using my poor old Toshiba laptop, but my desktop graphics card was up to the task, and connection was achieved in 5 seconds, throwing me in-world. The first world worked fine enough; the second one I tried caused the browser to freeze up (but that may have been because the test ended right about the same time). Now, I realise that this is the alpha build and that this was a stress test, so one shouldn’t expect too much yet. (I was annoyed by the way the browser grabbed my mouse and wouldn’t let me leave the world pane; turns out you have to hit the secret key to get it to release.) They’ve already made all of their tools available, even at this early stage, so that’s something to be commended! Check ’em out, see what you think. Full features list here.
Just had an interesting conversation with Joe Rigby of MellaniuM Design.
He was showing me a plugin they’ve developed for exporting AutoCAD models into the Unreal2 engine, and then scaling the textures back onto the model (usually, one would use something like 3D Studio Max or Maya to import models into Unreal2). From an archaeological point of view, this is significant: archaeologists have been using AutoCAD for years to create reconstructions of sites. Getting those models into a world engine would usually involve all sorts of translations, but if you could import directly from your existing archaeological AutoCAD model… you’d suddenly be able to experience the space that you’ve recreated. A 3D picture is still just a picture. Experiencing the space makes – as it were – a world of difference. Read Diane Favro or Kevin Lynch for a start on the importance of experiencing space.
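I haven’t seen inside MellaniuM’s plugin, and the sketch below is emphatically not it, but the general shape of the problem (pull the geometry out of the CAD file, write it to an intermediate mesh format, hand that to the engine’s importer) is easy to illustrate. Here’s a minimal, hypothetical example in Python using the ezdxf library (my choice, purely for illustration): it reads the 3DFACE entities from a DXF export of an AutoCAD model and writes them out as a Wavefront OBJ.

```python
# A minimal, hypothetical sketch of the CAD-to-engine idea, NOT MellaniuM's
# plugin, whose internals I haven't seen. It assumes the AutoCAD model has
# been exported as DXF, and uses the third-party ezdxf library
# (pip install ezdxf) to read the 3DFACE entities and dump them as a
# Wavefront OBJ, an intermediate format most engine toolchains can import.
import ezdxf

def dxf_to_obj(dxf_path: str, obj_path: str) -> None:
    doc = ezdxf.readfile(dxf_path)
    msp = doc.modelspace()

    vertices = []   # (x, y, z) tuples, in order of first appearance
    faces = []      # lists of 1-based vertex indices (the OBJ convention)
    index = {}      # vertex -> OBJ index, so shared corners aren't duplicated

    # A 3DFACE is a triangle or a quad; vtx0..vtx3 hold its corner points
    # (triangles just repeat the last corner, so we deduplicate below).
    for face in msp.query("3DFACE"):
        corners = [
            face.dxf.vtx0,
            face.dxf.vtx1,
            face.dxf.vtx2,
            face.dxf.get("vtx3", face.dxf.vtx2),
        ]
        ids = []
        for c in corners:
            key = (round(c.x, 6), round(c.y, 6), round(c.z, 6))
            if key not in index:
                vertices.append(key)
                index[key] = len(vertices)   # OBJ indices start at 1
            if index[key] not in ids:        # drop the repeated corner
                ids.append(index[key])
        if len(ids) >= 3:                    # skip degenerate faces
            faces.append(ids)

    with open(obj_path, "w") as out:
        for x, y, z in vertices:
            out.write(f"v {x} {y} {z}\n")
        for ids in faces:
            out.write("f " + " ".join(str(i) for i in ids) + "\n")

# e.g. dxf_to_obj("site_reconstruction.dxf", "site_reconstruction.obj")
# (hypothetical filenames, just to show the call)
```

Of course, the hard part that Joe’s plugin actually solves, getting the textures scaled and mapped back onto the geometry inside Unreal2, is exactly what a naive dump like this leaves out.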
In the demo Joe showed me, he walked his avatar around several architectural reconstructions (houses, etc.), into a large art gallery / museum (the pictures on the walls never pixellated, which was nice), and past their reconstruction of the Titanic. All the textures were very photorealistic, at least as good as, if not better than, anything I’ve seen in SL. This being Unreal2, he had to turn off the weapons and so on, but he did show a novel use of the sniper-scope feature, zooming in on the detail of his model. Unreal2 brings people into the world via a peer-to-peer system, allowing at least 30-odd people, if not more, to experience the same space at once: certainly enough for that class trip!
Joe’s keen to hear from any archaeologists interested in exploring this technology, perhaps for some joint projects. I’d send him what I had, just to see what would happen, except I don’t have any AutoCAD models lying about!