Very interesting use of what I’ll call pseudo fluid dynamics. It appears that, through clever use of texture mapping, scrolling, and stretching/warping of UV coordinates, Ubisoft have managed to create the illusion of fluid dynamics. The ability the game gives you to sculpt the landscape and watch the water flow will certainly add to its playability. The novelty might wear off after a while, but this does appear to be a novel technique (the game has so far been two years in the making).
This technique could well apply to climate change scenarios or flood simulation.
It also makes use of volumetric clouds and different physical properties for the ground, such as the difference between rock and soil. The soil spreads, which is akin to a melting or blurring function applied to the terrain displacement map.
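A soil-spreading pass of this kind can be sketched as a repeated neighbourhood blur on the displacement map. This is my own illustration of the idea, not Ubisoft’s implementation; the function name and kernel are assumptions:

```python
import numpy as np

def spread_soil(heightmap, iterations=1):
    """Simulate soil spreading by repeatedly averaging each cell with its
    four neighbours -- a simple blur on the terrain displacement map.
    (Illustrative sketch only, not the game's actual method.)"""
    h = heightmap.astype(float)
    for _ in range(iterations):
        padded = np.pad(h, 1, mode="edge")
        h = (padded[1:-1, 1:-1]       # centre
             + padded[:-2, 1:-1]      # north
             + padded[2:, 1:-1]       # south
             + padded[1:-1, :-2]      # west
             + padded[1:-1, 2:]) / 5.0  # east
    return h

# A single spike of "soil" flattens out over successive iterations.
terrain = np.zeros((5, 5))
terrain[2, 2] = 10.0
spread = spread_soil(terrain, iterations=3)
```

Each iteration lets material leak into neighbouring cells, so a mound slumps outwards, much like the soil behaviour shown in the video.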
This game claims to have the most sophisticated weather system in a driving game. The video shows the kinds of effect they have achieved: rain splashes on the ground and on the screen (camera), fog, spray, rain and clouds.
The idea of splashes onto the screen is particularly interesting. There will be another post soon featuring a stop-frame animation of clouds as seen through a window, specifically a ‘picture window’ with a landscape view. The 24-hour stop-frame animation includes passing clouds and rain; the rain splashes on the window and then dries out.
The image below is a crop from NASA’s 250m resolution TERRA satellite – 2003/163 – 06/12. The crop shows a 40km sample over Somerset, UK (160 x 160px). I’ve already been able to extract the cloud layer from a similar image and convert it to a layered cloud image. The challenges ahead are to identify cloud forms and shapes, to add particle clouds where the actual clouds are so as to form the height of the cloud, and to ensure the particle clouds are the correct height and shape.
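As a rough illustration of the extraction step: in a visible-band image, clouds are the brightest pixels, so a first-pass cloud layer can be pulled out with a simple luminance threshold. This is a sketch of the idea only — the threshold and luma weights are my assumptions, not the exact process I used:

```python
import numpy as np

def extract_cloud_mask(image, threshold=0.75):
    """Return a boolean cloud mask from an (H, W, 3) RGB satellite image
    with values in 0..1, by thresholding pixel luminance.
    (First-pass sketch; real imagery needs snow/ice handling etc.)"""
    luminance = image @ np.array([0.299, 0.587, 0.114])
    return luminance > threshold

# Toy 2x2 "image": one bright (cloud) pixel, three dark (ground) pixels.
img = np.array([[[0.9, 0.9, 0.9], [0.2, 0.3, 0.1]],
                [[0.1, 0.2, 0.1], [0.3, 0.2, 0.2]]])
mask = extract_cloud_mask(img)
```

The mask can then be used as a cutout texture, or as a seed region for placing particle clouds.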
This technique has been discussed by Dobashi et al. in “Modeling of Clouds from Satellite Images Using Metaballs”. Note that they used metaballs (soft geometric shapes) to create simulations of whole cloud systems. This technique may not be practical for realtime visualisation if it requires hundreds of metaballs, so I will investigate further to see whether it is feasible. It might be overcomplicated for the result I need; just analysing where the balls are and converting them to particles may be sufficient.
The terrain data is a crucial element, as it provides the recognisable element within the simulation. Resolution is the core issue with terrain data. Too low a resolution will lead to a landscape that is only recognisable from a considerable distance, i.e. from the sky. If this distance is too great, the view will not correspond to the reality of the user’s experience of the location (or potential experience, if they have never visited it), i.e. it is not true first person. Since part of the premise of the project is that it represents one’s own experience of weather (or potential experience), too much deviation from this aim would undermine the project.
Currently, I have terrain data with a 2m resolution. This gives a fairly accurate view from around 100m away, although buildings are rounded off. For objects closer than this, the rounding is fairly obvious. So the question this leaves is:
Is higher resolution terrain data needed for sub-100m views, or will 2m resolution data suffice?
The benefits of first-person weather visualisation depend on a variety of factors.
Definition of First Person 3D Visualisation
Perspective views in computer graphics, and in particular in video games, are referred to as first-person or third-person. This is an extension of the narrative mode used in writing. First-person view is where the virtual camera represents the eyes of the player: his or her point of view as the character in the game, as in Doom.
Third-person perspective is the use of a virtual camera to show the character (as in Mario games, Grand Theft Auto and Tomb Raider).
Many computer games allow the switching of views between first and third person.
Typical virtual reality systems have adopted a first person view. Those using headsets have often used stereo screens to provide a stereoscopic depth view.
The benefit of the first-person perspective is that the view is not about looking at a character, but about your own experience. I would argue that there is an increased sense of immersion with the first-person view, and that this perspective is more likely to trigger memories and to imprint on your memory.
Achieving this would be a step towards the believability issue of the overall project.
Here’s my initial attempt at combining some of the Frome dataset into a scene. It utilises two terrains (one high quality, one low quality) in an attempt to improve rendering speed and cut down on texture size.
The screenshots are from a real-time render, showing two views: street level and 10m above street level. Both views are of roads on a hill.
The sky is a simple skydome, not using any data other than a panoramic image. There is a small amount of fogging to indicate atmospheric perspective.
I now understand an issue with converting the height data from the 24bit files supplied, and indeed from previous data that I’ve used: the roads appear bumpy. This is a terracing-like effect that occurs when the images are converted to 8bit RAW files. I will hopefully be able to correct this by keeping 16bit greyscale textures, while avoiding errors such as large anomalies (e.g. jagged points).
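The terracing can be demonstrated by round-tripping a gentle slope through 8-bit and 16-bit channels. The 0–500m height range here is an assumed example, not the range of the supplied files:

```python
import numpy as np

# A gentle slope sampled on a 2 m grid: height rises ~1 cm per sample.
heights = np.linspace(100.0, 102.0, 200)  # metres

def quantise(h, bits, h_min=0.0, h_max=500.0):
    """Round-trip heights through an n-bit greyscale channel, as happens
    when a heightmap is exported to an n-bit RAW file."""
    levels = 2 ** bits - 1
    codes = np.round((h - h_min) / (h_max - h_min) * levels)
    return codes / levels * (h_max - h_min) + h_min

step8 = np.diff(quantise(heights, 8))
step16 = np.diff(quantise(heights, 16))
# 8-bit: flat runs punctuated by ~2 m jumps -- the terraces on the roads.
# 16-bit: steps stay within ~8 mm of the true slope.
```

With only 256 levels across the full height range, each grey level spans nearly 2m, so a gentle road becomes a staircase; 16-bit storage shrinks the quantisation step to millimetres.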
It is possible to see two types of shadow in this scene. The dark shadows are from the aerial photography. I’ve pointed a light source with realtime soft shadows to approximately match the angle of the sun when the photographic image was taken, so there is a double shadow. This shadowing will be an issue when it comes to showing images at different times of day, with the sun or cloud cover giving different lighting conditions. However, seeing the problem makes it easier to work out how to correct it.
Following a meeting with Chris Mewse of GetMapping, I am examining some data supplied by them for the purpose of creating a prototype weather system. The data is of two regions: my home town of Frome, Somerset, UK and Mount Snowdon (Yr Wyddfa), Wales.
Frome
The Frome data (12.5cm resolution photography and 2m resolution terrain) will enable me to make instant visual observations and to take photographic and video records regularly.
Snowdon
The Snowdon data (25cm resolution photography and 2m resolution terrain) will provide me with an extreme comparison, Snowdon being one of the highest mountains in Britain and within travelling distance. Live feed webcams are available for instant observations from the First Hydro website: http://www.fhc.co.uk/cams.htm.
I’ve been experimenting with a technique to extract height data of clouds from satellite images. The screenshots below show my first attempts. The data is a 250m resolution colour satellite image (need to provide link).
I’ve been able to extract the clouds and apply them to a landscape (currently not the same terrain that features in the satellite image). I’ve used several layers to which I’ve applied the height textures, and used a cutout to provide shadows. This has enabled me to show the heights of the clouds.
The clouds have then been animated by sampling different areas of the satellite image. Although this is currently far from an effective method of animating clouds, with further investigation, and by linking it with wind direction data, it could prove particularly useful. The method could be used to tween the frames between satellite key frames.
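A minimal sketch of the two ideas — shifting the sampled region by a wind offset between frames, and tweening between satellite keyframes. The function names and the pixel-shift model are my own illustration:

```python
import numpy as np

def scroll_sample(cloud_layer, wind_offset):
    """Fake cloud motion by shifting the sampled region of the satellite
    image by a per-frame wind offset (dy, dx) in pixels."""
    dy, dx = wind_offset
    return np.roll(cloud_layer, shift=(dy, dx), axis=(0, 1))

def tween(frame_a, frame_b, t):
    """Linearly blend between two satellite keyframes, t in [0, 1]."""
    return (1.0 - t) * frame_a + t * frame_b

# Toy cloud layer: one bright pixel that the "wind" moves diagonally.
layer = np.zeros((4, 4))
layer[0, 0] = 1.0
moved = scroll_sample(layer, (1, 1))  # cloud pixel now at (1, 1)
mid = tween(layer, moved, 0.5)        # halfway between the two keyframes
```

In practice the offset would be driven by wind direction and speed data, and the cross-fade would hide the fact that real cloud motion is not a rigid translation.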
The images show cut-out textures (viewed from above) and alpha-blended textures (viewed from below). The cut-out textures resemble the images seen on TV weather broadcasts, which presumably use similar slicing techniques to extract a satellite image and apply it to a bird’s-eye view of the ground.
The potential of the height element of slicing will be in mixing it with particles to create particle clouds using height data. Combining these techniques, e.g. Harris clouds and sliced clouds, will be my next task.
A promising, and probably the most accurate, route to understanding cloud states (on a global scale) is the study and extraction of cloud formations from satellite images. Satellites such as those operated by EUMETSAT are regularly recording the earth and publishing images to the web every hour, half hour, or even every 15 minutes.
So far I have yet to find data published in this way at a high enough resolution to allow me to extract clouds from the images. Typical resolutions are 2km or 1km. The best images I’ve found so far were at 250m resolution, in the visible band.
Infrared data shows the temperature of the clouds and hence gives an indication of their elevation. 3D data can be derived from this and potentially applied to particle clouds or 2D rendered clouds.
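As a rough sketch of that conversion: cloud-top height can be estimated from infrared brightness temperature by assuming the standard atmospheric lapse rate of about 6.5 K per km. This is a crude first approximation for visualisation purposes, not a calibrated retrieval, and the surface temperature here is an assumed value:

```python
def cloud_top_height(cloud_temp_k, surface_temp_k=288.0,
                     lapse_rate_k_per_m=6.5e-3):
    """Estimate cloud-top height in metres from IR brightness temperature,
    assuming temperature falls linearly with altitude (standard lapse rate).
    Crude approximation: ignores inversions, emissivity, thin cirrus etc."""
    return (surface_temp_k - cloud_temp_k) / lapse_rate_k_per_m

# A 250 K cloud top under a 288 K surface sits at roughly 5.8 km.
height = cloud_top_height(250.0)
```

Colder pixels map to higher cloud tops, which is exactly the height signal needed to drive particle clouds or stacked 2D layers.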
I’ve been gathering papers on realtime weather simulation, and one of the best attempts at cloud simulation appears to be that of Mark Harris’s PhD thesis and his further work from 2002 onwards (Harris, M., “Real-Time Cloud Rendering for Games”).
His method involves understanding the thermodynamics of clouds as well as methods for rendering them realistically, accounting for the scattering and absorption of light. He also describes using 2D impostors to speed up the rendering. However, general home computing power may have improved enough since 2002 to enable a mix of fully particle-based clouds, with impostors used only for distant clouds, or indeed for rendering a skybox.
For a while now I’ve been using water in my Light Years Coast work. Initially, this used simple reflective water. I then developed a technique that used a reflective plane (daylight water) with a refractive plane on top, the mesh of which was animated with sine waves.
Since early this year, I have been following and mildly adapting the Tessendorf water simulation created by the Unity3D community. This has enabled real-time, believable rendering of ocean water at varying scales. Recent additions have also included underwater rendering.
Having previously used domes as a means to display distant sky, I am now investigating the use of blended skyboxes. Skyboxes allow the use of up to six textures to create a seamless sky. However, their usual limitation is that they don’t animate: they are low-render-cost, high-quality images that display a still sky.
I am investigating methods to use skyboxes with animation, either as blending of pre-rendered textures or ideally, as realtime rendering based on satellite images.
Use spheres (or bell shapes) to render light on clouds.
1) Z-buffer slice the image and place particle systems.
2) Render each individual sphere as a particle mesh, with lighting dictated by the lighting on the spheres.
Needs knowledge of the shapes of the clouds.
3) Use coupled map lattices to simulate the thermodynamic flow of air and its conversion to moisture.
Allows dynamically correct movement of clouds, but needs precise data for a correct display of the weather.
4) Grab shading layers from infrared satellite images and convert them to shapes.
If cloud height is known (from cloud temperature values), then cloud layers can be separated. A cumulonimbus cloud would span a large temperature range, from low and warm to high and cold.
Alternatively, slice cloud heights depending on the angle of view, displaying them with knowledge of the height of the condensation layer.
Display the cloud slices on a series of parallel textures perpendicular to the camera’s viewpoint.
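The slicing in point 4 can be sketched by banding the infrared temperatures into layers, one mask per slice texture. The band edges here are illustrative values, not calibrated thresholds:

```python
import numpy as np

def slice_cloud_layers(ir_temps_k, band_edges_k):
    """Split an IR satellite image into stacked cloud layers: each layer
    keeps the pixels whose brightness temperature falls in one band
    (colder bands sit higher), giving one mask per slice texture."""
    return [(ir_temps_k >= cold) & (ir_temps_k < warm)
            for cold, warm in zip(band_edges_k[:-1], band_edges_k[1:])]

# Toy IR image: one cold/high pixel, one mid-level pixel, two warm
# (effectively clear) pixels.
temps = np.array([[220.0, 255.0],
                  [280.0, 285.0]])
layers = slice_cloud_layers(temps, [200.0, 240.0, 270.0])
# layers[0] marks the coldest (highest) cloud; layers[1] the mid-level one.
```

A cumulonimbus would light up several bands at once, from the warm low slices to the cold high ones, which is what makes its vertical extent recoverable from a single IR image.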