I made this in ZBrush while paying close attention to creating something that resembled a CCTV camera, but my plan is to mount the camera on two legs, producing a mech-like unmanned recon vehicle.

The 3D design is based upon some rough sketches I laid down from memory; most of the inspiration comes from MGS and the old Super 8 camera.

Diepod1_Planning

I made this using the concept art from daisukekazama on DA (http://daisukekazama.deviantart.com/art/M720-Revolver-Design-Multiview-281568933). It's fully game ready at a polycount of 6K, which is fairly extensive, but that is without any form of cleanup on the low-poly mesh.

I also made a variant with the textures and produced the one below.

Texture Presentation SideView_M720_variant

Current Technical challenges for computer games

  • Physics and dynamics
  • Dynamic lighting
  • AI
  • Terrains, Landscapes and Level of detail

Physics and dynamics take up a huge part of everyday gaming, but making the experience unique is where the task of the game developer becomes difficult. Typical in-game physics utilise a physics engine that enables lifelike physical actions and reactions to take place.
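At its core, a physics engine repeatedly advances every body by a small time step. Below is a minimal sketch of one such update, using semi-implicit Euler integration on a single falling body; the gravity constant and 60 Hz tick rate are illustrative assumptions, not taken from any particular engine.

```python
# Minimal sketch of one physics-engine update tick (semi-implicit Euler),
# assuming a single body with a vertical position and velocity.

GRAVITY = -9.81  # m/s^2 along the y axis (assumed constant)

def step(position_y, velocity_y, dt):
    """Advance one simulation tick: update velocity first, then position."""
    velocity_y += GRAVITY * dt
    position_y += velocity_y * dt
    return position_y, velocity_y

# Simulate a body dropped from 10 m for one second at 60 ticks per second.
pos, vel = 10.0, 0.0
for _ in range(60):
    pos, vel = step(pos, vel, 1.0 / 60.0)
```

Real engines run the same kind of loop over thousands of bodies, which is why offloading it to dedicated hardware matters.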

Making an in-game physics engine has become somewhat easier over the last couple of years with the use of pre-made code that was written for a specific engine, later became outdated, and whose rights have since been made public.

One of the most common physics engines for games is the PhysX engine by Nvidia. The way the Nvidia GPU works enables some of its multicore technology to be utilised to run real-time simulations, which is not only a huge leap for games but also a very hard technological leap. Video games supporting hardware acceleration by PhysX can offload physics calculations from the CPU to a PhysX-capable GeForce GPU, allowing the CPU to perform other tasks instead. This typically results in a smoother gaming experience and additional visual effects.

Dynamics in games are highly similar to typical physics but are defined more by the way objects react to each other; for example, if a wall were to collapse in game, should the rubble land on the floor or fall through it? The physics engine calculates that the object should fall, and the dynamic object has a rule stating it shouldn't fall through the floor.
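The "rubble should not fall through the floor" rule can be sketched as a simple constraint applied after each physics step; the flat floor plane and the snap-back behaviour here are simplifying assumptions.

```python
# Hedged sketch of a floor-collision rule: after the physics step moves a
# body, this constraint pushes it back above the floor plane and cancels
# any remaining downward velocity.

FLOOR_Y = 0.0  # assumed flat floor at height zero

def resolve_floor_collision(position_y, velocity_y):
    """Keep a falling object on top of the floor instead of passing through it."""
    if position_y < FLOOR_Y:
        position_y = FLOOR_Y                # snap back onto the floor plane
        velocity_y = max(velocity_y, 0.0)   # cancel downward motion only
    return position_y, velocity_y
```

For example, a body that tunnelled to `-0.3` with velocity `-2.0` would be corrected to rest at `0.0`.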

AI  (Artificial Intelligence)

AI in video games is based upon techniques to produce the illusion of intelligence in the behaviour of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.

Seeing as gaming AI is based around creating the illusion of intelligence in many cases, the computer abilities must be toned down to give human players a sense of fairness. This, for example, is true in first-person shooter games, where NPCs’ otherwise perfect aiming would be beyond human skill.
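One common way to tone down perfect aim is to perturb the shot direction by a random spread; the sketch below assumes a 2D world and a uniformly distributed error angle, which is one possible tuning choice rather than any specific game's method.

```python
# Sketch of "toning down" perfect NPC aim: instead of firing exactly at
# the target, the shot angle is perturbed by a uniform random error.
import math
import random

def aimed_angle(shooter, target, spread_radians, rng=random):
    """Return the angle toward the target plus a bounded random error."""
    dx, dy = target[0] - shooter[0], target[1] - shooter[1]
    perfect = math.atan2(dy, dx)  # the "beyond human skill" perfect aim
    return perfect + rng.uniform(-spread_radians, spread_radians)
```

With `spread_radians=0` the NPC aims perfectly; increasing it makes the NPC fairer to play against.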

A very common framework for AI in games is to have states of behaviour for every interaction a player can have with the NPC. Below is the most common arrangement of this.

•Idle. In this state, the entity is passively standing around or walking along a set path. Perceptions are low. Player sounds are not often checked for. Only if this entity is attacked or “sees” a player directly in front of it will its state change to a higher level of awareness.

•Aware. This entity is actively searching for intruders. It checks often for the sounds of the player and sees farther and wider than an idle entity. This entity will move to the Intrigued state if it notices something out of place (something to check for), such as open doors, unconscious bodies, or spent bullet casings.

•Intrigued. This entity is aware that something is up. To demonstrate this behavior, the entity will abandon its normal post or path and move to areas of interest, such as the aforementioned open doors or bodies. If a player is seen, the entity goes to the Alert state.

•Alert. In this state, the entity has become aware of the player and will go through the actions of hunting down the player: moving into range of attack, alerting fellow guards, sounding alarms, and finding cover. When the entity is within range of the enemy, it switches to the Aggressive state.

•Aggressive. This is the state where the enemy has engaged in combat with the player. The entity attacks the player when it can and seeks cover between rounds of attack (based on attack cool-downs or reloading). The entity only leaves this state if the enemy is killed (return to normal), if the enemy moves out of firing range (go back to the Alert stage), or if the entity dies (go to the Dead state). If the entity becomes low on health, it may switch to the Fleeing state, depending on the courage of the specific entity.

•Fleeing. In this state, the entity tries to run from combat. Depending on the game, there may be a secondary goal of finding health or leaving the play area. When the entity finds health, it may return to the Alert state and resume combat. An entity that “leaves” is merely deleted.

•Dead. In some games, the state of death may not be completely idle. Death or dying can have the entity “cry out,” alerting nearby entities, or go into a knocked-out state, where it can later be revived by a medic (and returned to a state of Alert).
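The states above can be sketched as a finite state machine driven by a transition table. The state names follow the list in the text; the event names (`"notices_clue"`, `"in_attack_range"`, and so on) are illustrative assumptions.

```python
# Minimal sketch of the NPC state machine described above, using a
# transition table keyed by (current state, event).

TRANSITIONS = {
    ("Idle", "attacked"): "Alert",
    ("Idle", "sees_player"): "Alert",
    ("Idle", "begin_patrol"): "Aware",
    ("Aware", "notices_clue"): "Intrigued",       # open door, body, casings
    ("Intrigued", "sees_player"): "Alert",
    ("Alert", "in_attack_range"): "Aggressive",
    ("Aggressive", "enemy_killed"): "Idle",
    ("Aggressive", "enemy_out_of_range"): "Alert",
    ("Aggressive", "low_health"): "Fleeing",
    ("Aggressive", "died"): "Dead",
    ("Fleeing", "found_health"): "Alert",
}

class NPC:
    def __init__(self, state="Idle"):
        self.state = state

    def handle(self, event):
        """Move to the next state if the table defines a transition;
        otherwise stay in the current state."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

A guard would then escalate step by step: `Idle` → `Aware` → `Intrigued` → `Alert` → `Aggressive`, exactly as the list describes.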

AI is a major technological challenge of gaming and can make or break a game; if the AI is too easy, or doesn't act in a similar way to a human or creature, players may feel alienated or even find the in-game behaviour of the AI humorous.

Terrains, Landscapes and Level of detail

When creating any game in this day and age, an environment needs to be made for the player to inhabit; creating realistic terrains and landscapes with a sense of scale is very demanding and challenging for modern-day artists.

There are a few tools to help with the process, but it's usually a pipeline: first, the concept artist sketches out the artwork that will help the level designers produce the final 3D composite.

Once the concept art is done, it's given to the 3D designer, who uses either an in-game editor or a separate 3D package to produce the maps.

One of the most common approaches is an editor that comes pre-made with sculpting tools already integrated into it; the artist then uses this to recreate the environment.

The level of detail within an environment is achieved with texture maps and lower-poly meshes that are displayed at different distances. Texture mapping is a method for adding detail, surface texture (a bitmap or raster image), or colour to a computer-generated graphic or 3D model. A texture map is applied (mapped) to the surface of a shape or polygon. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2D case is also known as a UV coordinate), either via explicit assignment or by procedural definition.
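Distance-based level of detail can be sketched as a simple lookup: the farther the camera is from an object, the lower-poly the mesh the renderer draws. The distance thresholds and mesh names below are illustrative assumptions.

```python
# Sketch of distance-based level-of-detail selection: pick a lower-poly
# mesh variant the farther the camera is from the object.

LOD_LEVELS = [
    (25.0, "mesh_high"),          # closer than 25 units: full-detail mesh
    (75.0, "mesh_medium"),        # 25-75 units: reduced mesh
    (float("inf"), "mesh_low"),   # beyond 75 units: lowest-poly mesh
]

def select_lod(distance):
    """Return the mesh variant to draw at the given camera distance."""
    for max_distance, mesh in LOD_LEVELS:
        if distance < max_distance:
            return mesh
    return LOD_LEVELS[-1][1]
```

Engines typically pair each level with a matching texture resolution as well, so distant objects cost less to draw in both geometry and texture bandwidth.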


I did write almost nine pages of information on how to go through every one of these steps, but WordPress deleted it, so here's a gallery of my progress.

If you want to actually view the 3D model, I also made the mesh viewable in Unity; go here to see it 🙂

After looking into the artist Thomas Quinn and being inspired by his almost flawless typography, I decided to take a shot at doing it myself; here's my progress throughout the experiment.

The way I planned to achieve this was through the use of a projector, paint, Illustrator and some creative drive. I first used Illustrator to come up with a design that would be best to use; I went with typography, but images can be used with this technique too (it just takes more time, effort and skill).



Then I exported the image as a .png so I could use it later for the projection process.

Nice and transparent :)


I set up the projector so that it would project the image onto a stack of boxes arranged in a way that looked asymmetrical. The idea here is to project the image from the height at which the viewer should perceive the final piece (for my experiment I placed the projector low, so that the image could be seen without looking from an awkward angle but would seem quite distorted from above).


This step is probably best done with the lights off, as the projection can be seen more clearly.

Once the projector is in place the image needs to be traced like a stencil.


Tracing the image proved difficult, as I kept blocking the projection with my own body while reaching for hard-to-get places.


Another thing to note is that keeping lines straight when drawing from difficult stances is extremely hard.

Nearly done


When looking from above




A few things I should note about the project: I really did enjoy doing this experiment, but I will say that although it looks alright as an end product, I would change a few things now that I have attempted anamorphic typography.
The first thing I would have changed about the way I accomplished this is the way I projected the image. The projection was at an extreme angle, but it was still an observable one (the image could be made out because the projection was done with a high pitch angle but hardly any yaw); this resulted in an image that could be made out from the front view without having to stand (or sit) in the position intended for observing the artwork.

The second thing I would change would definitely be time; this experiment took place over the space of four hours, leaving almost no room for error.

In comparison to Thomas Quinn's anamorphic art, my work really lacked the definition he achieved in his piece, mainly because I had a very small amount of time, but also due to the difficulty of painting over a projected image.

Other than those things, I would say I'm really happy with the way this turned out; I will definitely try this again, taking the problems I encountered into consideration for the next attempt.


Looking at the artwork above, it appears almost normal at first glance, yet it seemingly couldn't exist within the 3D space it inhabits. The image below shows exactly how it was achieved, using an artistic technique known as anamorphosis. Within anamorphosis there are two main types: perspective and mirror. Perspective anamorphic images were first seen during the Renaissance (15th century), while mirror anamorphic images first appeared around the early 16th century.

Thomas Quinn uses perspective anamorphosis to make simple typography almost magically appear out of thin air; looking at the image from a different angle, you can clearly see how the typography was painted onto the walls in a way that displays the final image, but only from one specific angle.
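The geometry behind the trick can be sketched with a little vector maths: assuming a single eye position and a flat wall, each point of the intended image is painted where the eye's line of sight through that point hits the wall. The coordinates and the wall plane below are illustrative assumptions, not measurements from Quinn's piece.

```python
# Sketch of perspective anamorphosis: intersect the ray from the eye
# through a point of the "virtual" image with a vertical wall plane
# x = wall_x; paint the point there and it will look correct from the eye.

def project_onto_wall(eye, image_point, wall_x):
    """Return where the ray eye -> image_point hits the plane x = wall_x."""
    ex, ey, ez = eye
    px, py, pz = image_point
    t = (wall_x - ex) / (px - ex)   # ray parameter at the wall plane
    return (wall_x, ey + t * (py - ey), ez + t * (pz - ez))
```

For instance, a virtual point one unit in front of the eye ends up painted twice as high on a wall twice as far away, which is exactly why the painted letters look stretched from any other viewpoint.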



Thomas Quinn is a graphic designer living in Chicago, Illinois, and the principal designer at Blank Is The New Black, a creative studio that keeps an eye on new trends in design but always chooses smart design over fashionable design. The designer uses many different types of art to create illustrations and typographics in simplistic yet smart ways.

After looking into some of his artworks I’ve become inspired to make some anamorphic typography similar to the style of Thomas Quinn.