“The SLSI NUI Interaction Scripts is a collection of Python addons to support the creation of sign language animations in Blender using Natural User Interfaces, such as the Microsoft Kinect and the Leap. It also supports direct facial control of MakeHuman characters via FaceShift.”
I’m really proud to announce that I’ve been invited to conduct a seminar about the demoscene.
The seminar, entitled Graffiti Elettronici (Electronic Graffiti), will be held next Monday (22 April 2013 @ 17:00) at the CIRMA department (Interdepartmental Center for Research on Multimedia and Audio-Video), University of Turin, Italy.
Non-interactive multimedia applications made of special effects, electronic music, deadlines, last-minute fixes and a mission: squeeze the hardware to astonish the viewer. […]
Introductory journey to the world of the “scene” and its parties, kick-starters for the best multimedia software developers in the world. […]
The seminar will present a brief history of the demoscene, from its origins in the ’80s (on 8-bit systems) until today, when demos are made for PCs and consoles as well as for the new web technologies. […]
The tools used in production and some programming techniques will be surveyed […]
A jump back to the good old times of demoing and partying. It’s been more or less 20 years since I last attended a party! I was curious to see what the atmosphere was like: the smell of old burned cables, people sleeping on keyboards, extreme coding at work, and a lot of fun.
Nice to see that Amiga and C64 are still in the competition!
I shot a few pics and videos, just to give you an idea of the mood.
Game compo. A cool team developed, in 12 hours, a Wii game where – totally drunk – you use your… ehmm… WiiMote to piss into a toilet. The more you advance in the levels, the drunker you get, making it reeeeally difficult to score.
The next video is an example of the introduction to demo competition sections. Really cool!
However, I have a comment for the organisers. I was asked to pay the full price (60 Euro) to attend a single evening. No reduced or partial tickets were available.
I agree that this is a fair price to attend the whole party: you get tables, chairs, power supply, Wi-Fi, toilets and showers. But this policy simply and completely cuts out the participation of non-sceners.
I would have loved to see “civilians” sitting in the front row for the demo compos. People who normally spend an evening at the cinema or theatre might love to see some digital art at work. There are many video artists around who would really enjoy the show. The day after, I was not able to attend the party, but I would never have suggested that friends go to see the competition at that price.
I would love to see demo parties opened to “passive” attendees, at least during show time, favouring an exchange of ideas between demo addicts and art lovers in general.
… and you might even have the chance to see more women around
There are plenty of examples, among them also a full-featured 3D FPS: BananaBread.
C++, which had been stuck at its 2003 standard, went through a major update in 2011 (C++11). It now offers a simpler syntax for handling containers and plenty of new built-in features. Will it be able to keep pace with high-level, non-native, interpreted languages?
At the same time, many real-time 3D software developers (including me) argue that it’s not enough for developing games. Is that true?
However, HTML5 today seems to be the only way to write an application once and deploy it on many different platforms.
It reminded me of when I had the chance to try one of those mind-reader headsets. It was a headset with an additional protuberance, much like a microphone, to stick on your forehead. Sorry, I don’t remember the name of the company, and I don’t even know whether they had any success. I’ll update the post once I find the contact again.
But what I clearly remember is how impressed, and let me say a bit scared, I was while using the mind reader.
Let me tell you the story. The headset was presented as being able to read the user’s levels of “concentration” and “relaxation”. The operator was moving my avatar around in a virtual world, asking me to perform tasks like: “concentrate to lift up that box” and “relax to set that car on fire”.
While trying to concentrate to make the box fly, I was distracted by a bottom bar showing my level of concentration. I mean, while telling me that I was succeeding in concentrating, it was in fact distracting me from the task! And the box kept falling down, several times.
Apart from the criticism of the superficial GUI design, what heavily impressed me is that I could not “get away” from the game. By getting away I mean detaching from the immersion in order to reason about a strategy calmly. With traditional joypads you can sit back, relax your hands, look away from the screen and think about how to perform the next steps in the game. The game engine would not know that you lost your immersion.
With a head-mounted brainwave reader you can’t afford that. The moment you lose your immersion, and hence your concentration, the game engine knows it and can react accordingly. Isn’t that the best moment, dear AI, to slash the player with your sword?
Anyway, I’m dreaming of an RPG where wizards must “really” concentrate to cast spells.