Video Engine Visual Effects

From Hero of Allacrost Wiki

Particle Effects[edit]

Overview of particle effects

Particle effects can be used to create things like rain, fire, snow, and various other effects.

Here is a diagram showing the basic structure of an effect. As you can see, an effect contains one or more "systems", and each system contains exactly one "emitter".

Video engine effect structure.png

Note that effects are nothing more than a container for a list of systems, and most effects are just a single system. However, multiple systems per effect are supported for something like a campfire, for example, where you might have the fire itself, some smoke, and glowing embers rising from the fire.

Basic steps to use particle effects

There are four steps to add effects into your code:

1. Create a particle manager. First, obtain a pointer to a new ParticleManager by calling ParticleManagerFactory::NewParticleManager(). Note that each game mode should have its own particle manager. This way, if you're in battle mode, you won't be updating/drawing effects from map mode.

2. Add the effects to the manager. If you want to create an explosion at point (x, y) on the screen, instead of creating a ParticleEffect yourself and then managing it, you just tell the manager where to place the effect. The function call looks like this: AddEffect("explosion.lua", x, y).

3. Call update/draw each frame. Pretty self explanatory :) When you call ParticleManager::Draw(), it will draw all the effects that you've added.

4. Destroy the particle manager when you are done with it. Call ParticleManager::Destroy()

Here is a code sample showing all of these things:

// in the game mode init
_particle_manager = ParticleManagerFactory::NewParticleManager();

// when Claudius gets hit by a bomb
ParticleEffectID effect_id = _particle_manager->AddEffect("explosion.lua", claudius_x, claudius_y);

// every frame, update and draw
_particle_manager->Update(frame_time);
_particle_manager->Draw();

// on game mode exit
_particle_manager->Destroy();
Notice that AddEffect() returns a ParticleEffectID. This basically gives you an ID that you can convert to a ParticleEffect * pointer at any time, if you want access to the effect directly. The reason an ID is given instead of a pointer is because there's no guarantee the effect will be alive 5 seconds later-- if the effect's lifetime expires, it is removed from the manager. In the next section, you'll see some cases where you want to manipulate the effect directly.
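The ID-to-pointer pattern can be sketched generically. The class and field names below are hypothetical, not the engine's actual code, but the behavior is the same idea: IDs go safely "stale" once an effect expires, whereas a raw pointer would dangle.

```cpp
// Minimal sketch of an ID-handle manager (hypothetical names, not engine code):
// GetEffect() returns nullptr once the effect has expired and been removed.
#include <map>

struct Effect { float lifetime; };

class EffectManager {
public:
    int AddEffect(float lifetime) {
        int id = _next_id++;
        _effects[id] = Effect{lifetime};
        return id;
    }
    Effect *GetEffect(int id) {
        std::map<int, Effect>::iterator it = _effects.find(id);
        return (it == _effects.end()) ? nullptr : &it->second;
    }
    void Update(float dt) {
        // expire effects whose lifetime has run out
        for (std::map<int, Effect>::iterator it = _effects.begin(); it != _effects.end();) {
            it->second.lifetime -= dt;
            if (it->second.lifetime <= 0.0f) _effects.erase(it++);
            else ++it;
        }
    }
private:
    std::map<int, Effect> _effects;
    int _next_id = 1;
};
```

Holding the integer ID and re-looking it up each time you need the effect is what makes the "effect may have died 5 seconds later" case safe.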

Stopping effects

You can quickly stop all effects currently being displayed by calling ParticleManager::StopAll(). If you want to stop a specific effect, then you have to get a pointer to the effect using its ParticleEffectID, and call its Stop() function, like this:

ParticleEffect *effect = _particle_manager->GetEffect(effect_id);
effect->Stop();

By default, the stop function is very polite... Instead of killing the effect immediately, it simply stops the effect from spawning new particles, then patiently waits for the remaining particles to fizzle out before removing the effect. However, if you do need to kill an effect instantly, you can pass true to the stop function. (This works for both Stop() and StopAll().)

Moving effects

When you call AddEffect(), you pass a position for the effect. But, what if you want the effect to move around? (e.g. a fireball attached to a missile) In that case, you can get a pointer to the effect and call Move() or MoveRelative() on the effect. For example:

effect->Move(fireball.x_velocity * frame_time, fireball.y_velocity * frame_time);

Rotating effects

Some effects, like an explosion, are omnidirectional. Others, like a jet of smoke, are oriented in a particular direction. You can change the orientation of an effect by calling the SetOrientation() function. The angle here follows normal trig convention: 0.0f = right, PI / 2 = up, PI = left, 3 * PI / 2 = down.
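This convention is just the standard math angle, assuming the y axis points up, so a direction vector can be converted to an orientation with atan2. A small self-contained sketch (not engine code):

```cpp
// Sketch: converting a direction vector into the angle convention above
// (0 = right, PI/2 = up, PI = left, 3*PI/2 = down), assuming y increases upward.
#include <cmath>

float DirectionToOrientation(float dx, float dy) {
    float angle = std::atan2(dy, dx);   // standard math angle in (-PI, PI]
    if (angle < 0.0f)
        angle += 6.2831853f;            // wrap into [0, 2*PI)
    return angle;
}
```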


Setting attractor points

There is a property for particles called radial acceleration, which is just a fancy way of saying that particles accelerate toward or away from the emitter. However, if you want particles to be attracted to or repelled from some point other than the emitter, you can set an "attractor point". Currently, the system only supports one attractor point at a time (for performance/complexity reasons). Note that setting an attractor point only has an effect if radial acceleration is enabled.

effect->SetAttractorPoint(x, y);
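Under the hood, attraction is just an acceleration along the line between the particle and the attractor. Here is a self-contained sketch of one integration step (the struct and field names are hypothetical, and the sign convention is my assumption: positive pushes away, negative pulls in):

```cpp
// Sketch: applying radial acceleration relative to an attractor point.
#include <cmath>

struct Particle { float x, y, vx, vy; };

void ApplyRadialAcceleration(Particle &p, float attractor_x, float attractor_y,
                             float radial_accel, float dt) {
    // Vector from the attractor to the particle.
    float dx = p.x - attractor_x;
    float dy = p.y - attractor_y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist > 0.0f) {
        // Accelerate along that direction: positive = repel, negative = attract.
        p.vx += (dx / dist) * radial_accel * dt;
        p.vy += (dy / dist) * radial_accel * dt;
    }
}
```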

Particle system properties

Here is a list of properties particle systems can have. Properties which specifically pertain to the emitter are prefixed with "Emitter." Keyframe information is prefixed by "Keyframe." System information is prefixed with "System.":

Emitter.x, Emitter.y Position of the emitter, relative to the position of the effect. Most of the time, this is (0, 0)

Emitter.x2, Emitter.y2 This is used to define the second corner of the emitter if the emitter shape is a rectangle.

Emitter.center_x, Emitter.center_y Center point of the emitter. This is usually just the same as Emitter.x and Emitter.y, unless the emitter is shaped as a rectangle or line, in which case it's the midpoint.

Emitter.x_variation, Emitter.y_variation Very useful for adding some random variation to the spawn position of each particle, makes it look more natural.

Emitter.radius This is the radius of the emitter, if it is circle shaped.

Emitter.shape The shape can take on the following values:

  • "Point": particles are emitted from (x,y)
  • "Line": particles are emitted randomly along the line from (x,y) to (x2, y2)
  • "Circle outline": particles are emitted along the outline of the circle at (x,y) with radius "radius"
  • "Circle": particles are emitted anywhere within the circle at (x,y) with radius "radius"
  • "Rectangle": particles are emitted anywhere in the rectangle from (x,y) to (x2, y2)

Emitter.omnidirectional If set to "1", then particles can have any initial direction.

Emitter.orientation If the emitter is not omnidirectional, then this sets the initial angle for emitted particles. (0=right, PI/2=up, PI=left, 3*PI/2=down)

Emitter.outer_cone This angle controls the orientation variation of the particles. If outer cone is 90, then it means the direction of the particles is the orientation, plus or minus 45 degrees.

Emitter.inner_cone This should be an angle equal to or smaller than outer cone. The angles within the inner cone will have the highest concentration of particle emissions, and the angles which are contained in outer cone, but not in inner cone will have a falloff in emission rate.
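The exact falloff curve between the two cones isn't specified here. One plausible reading, sketched below, is full emission weight inside the inner cone with a linear falloff to zero at the outer cone's edge (this is an illustration, not the engine's actual curve):

```cpp
// Sketch: emission weight as a function of a particle's angular offset from
// the emitter orientation, given inner and outer cone angles.
#include <cmath>

float EmissionWeight(float offset_from_orientation, float inner_cone, float outer_cone) {
    float a = std::fabs(offset_from_orientation);
    if (a <= inner_cone * 0.5f)
        return 1.0f;    // inside the inner cone: full emission rate
    if (a >= outer_cone * 0.5f)
        return 0.0f;    // outside the outer cone: no emission
    // between the two cone edges: linear falloff
    return 1.0f - (a - inner_cone * 0.5f) / (outer_cone * 0.5f - inner_cone * 0.5f);
}
```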

Emitter.initial_speed Initial speed of emitted particles

Emitter.initial_speed_variation Variation added to the initial speed

Emitter.emission_rate Particles to emit per second
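A rate in particles per second has to be turned into a whole number of particles per frame, and frame times vary. The standard trick is a fractional accumulator; a sketch (hypothetical names, not engine code):

```cpp
// Sketch: converting a per-second emission rate into per-frame emission
// counts, carrying the fractional remainder between frames.
struct Emitter {
    float emission_rate;   // particles per second
    float accumulator;     // fractional particles carried between frames
};

int ParticlesToEmit(Emitter &e, float dt) {
    e.accumulator += e.emission_rate * dt;
    int count = (int)e.accumulator;    // whole particles to emit this frame
    e.accumulator -= count;            // keep the fraction for next frame
    return count;
}
```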

Emitter.start_time How many seconds after the effect starts until the emitter should start emitting. This is useful if you have several systems within one effect, but you want one system to start after the others.

Emitter.emitter_mode Can be one of the following values:

  • "looping": the emitter will emit a constant flow of particles at the emission rate
  • "one shot": the emitter will pump out particles until system_lifetime
  • "burst": emits all particles at once and then stops emitting
  • "always": emits particles any time the number of particles is less than the max, regardless of emission rate

Emitter.spin Can be one of the following values:

  • "random": each particle will spin either clockwise or counterclockwise, with 50/50 chance
  • "clockwise": all particles will spin clockwise
  • "counterclockwise": all particles will spin counterclockwise

Note that even if the spin is clockwise, a particle can spin counterclockwise if its angle is negative!

Keyframe.size_x, Keyframe.size_y X and Y scaling factors for the particle at this keyframe

Keyframe.size_variation_x, Keyframe.size_variation_y X and Y scaling variations for the particle at this keyframe

Keyframe.rotational_speed Rotational speed (degrees / second) at this keyframe

Keyframe.rotation_speed_variation Rotation speed variation for this keyframe

Keyframe.color Color of the particle at this keyframe

Keyframe.color_variation Variation in color components for this keyframe

Keyframe.time Time of this keyframe. Can be from 0.0 to 1.0

System.animation_frames List of image filenames. If the particles are non-animated, then this is just a list of 1 element.

System.animation_frame_times List of times, how long to show each frame, in units of 1/30 of a second. If all frames are same length, this is a list of 1 element.

System.enabled If zero, this system won't be drawn. This is for testing purposes- if you want to quickly disable a system without actually removing it completely.

System.blend_mode Can be one of the following values:

  • "none": no blending
  • "blend": regular alpha blending
  • "additive": additive

System.system_lifetime How long the system can live for before expiring. (in seconds) ONLY APPLIES TO ONE_SHOT MODE.

System.particle_lifetime How long each particle lives for before dying. (in seconds)

System.particle_lifetime_variation Variation added to particle lifetimes to make system look more natural

System.max_particles Maximum number of particles that can be alive at any time

System.damping Value that is multiplied by particle's velocity each second. For example if damping is 0.7, then particles lose 30% of their velocity each second.
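Since damping is specified per second but applied per frame, a frame-rate-independent implementation raises the damping factor to the power of the frame time. This is a sketch of that idea, not the engine's actual code:

```cpp
// Sketch: frame-rate-independent damping. With damping = 0.7, a particle
// keeps 70% of its speed after one second, no matter how many frames that
// second is split into.
#include <cmath>

void ApplyDamping(float &vx, float &vy, float damping, float dt) {
    float factor = std::pow(damping, dt);
    vx *= factor;
    vy *= factor;
}
```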

System.damping_variation Variation added to the damping

System.acceleration_x, System.acceleration_y Constant acceleration that is applied to all particles. Good for modeling gravity for example.

System.acceleration_variation_x, System.acceleration_variation_y Variation in the acceleration.

System.wind_velocity_x, System.wind_velocity_y Constant velocity on all particles. Most commonly used for wind.

System.wind_velocity_variation_x, System.wind_velocity_variation_y Variation on wind velocity.

System.wave_motion_used Set to '1' if wave motion is being used.

System.wave_length Wavelength of the sinusoidal wave (distance from peak to peak), in pixel units.

System.wave_length_variation Variation in wave length.

System.wave_amplitude Amplitude of the wave, in pixels.

System.wave_amplitude_variation Variation in wave amplitude.

System.tangential_acceleration Acceleration perpendicular to the direction from the attractor to the particle. (If SetAttractor() hasn't been called, then the attractor is the emitter.)

System.tangential_acceleration_variation Variation in tangential acceleration

System.radial_acceleration Acceleration towards or away from the attractor point. (If SetAttractor() hasn't been called, then the attractor = the emitter).

System.radial_acceleration_variation Variation in radial acceleration.

System.user_defined_attractor Set to '1' if a user defined attractor is allowed. If this is zero, then the attractor will always be the emitter, even if SetAttractor() is called.

System.attractor_falloff Scales the falloff in attraction force as point gets further from the attractor. The magnitude of attraction is calculated as 1.0f - attractor_falloff * distance, where distance is the distance in pixel units between the particle and the attractor point. If attractor falloff is 0.001 for example, then every 100 pixels away from the attractor is a 10% decrease in the strength of the attraction.
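That formula can be written out directly; the clamp at zero for large distances is my assumption, so the attraction never goes negative:

```cpp
// The falloff formula from the description above: magnitude of attraction
// is 1.0 - attractor_falloff * distance, clamped to zero.
#include <cmath>

float AttractionMagnitude(float attractor_falloff, float distance) {
    float m = 1.0f - attractor_falloff * distance;
    return (m < 0.0f) ? 0.0f : m;
}
```

With attractor_falloff = 0.001, a particle 100 pixels away feels 90% of the full attraction, matching the example in the text.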

System.rotation_used Set to '1' if particles are allowed to rotate, otherwise zero.

System.rotate_to_velocity If '1', then particles are rotated to align with their velocity. So for example if a particle system sends out bullets flying in various directions, they will all face the direction they're going.

System.speed_scale_used If rotate to velocity is '1' and speed scaling is '1', then particles can be scaled according to their speed. This is a crude form of motion blur: faster-moving particles are stretched.

System.speed_scale If rotate to velocity and speed scale used are both '1', then this allows particles to be stretched by a factor of speed * speed_scale.

System.min_speed_scale This sets a minimum cap on the scaling due to speed scale, to prevent particles from being scaled into nothingness if they are too slow :)

System.max_speed_scale This sets a maximum cap on the scaling due to speed scale, so particles don't become ridiculously long
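Putting the last three properties together, the per-particle stretch might be computed like this (a sketch; the parameter names mirror the property names above):

```cpp
// Sketch: speed-based stretch factor with the min/max caps applied.
#include <cmath>

float ComputeSpeedScale(float speed, float speed_scale,
                        float min_speed_scale, float max_speed_scale) {
    float scale = speed * speed_scale;
    if (scale < min_speed_scale) scale = min_speed_scale;  // don't vanish when slow
    if (scale > max_speed_scale) scale = max_speed_scale;  // don't become a streak
    return scale;
}
```

With the values from the example file below (speed_scale = 0.005, min 1.0, max 20.0), a particle moving at 1000 pixels/second is stretched 5x.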

System.smooth_animation If '1', smooth animation is used. This is used if you have animated particles, and you want the transitions between the animation frames to be crossfaded. For the most part this feature should be avoided since it's very expensive.

System.modify_stencil Set this to '1' to have the particle system draw to the stencil buffer. Systems that use the modify_stencil parameter will NOT be displayed visibly on the screen.

System.stencil_op Can be one of the following values:

  • "incr": increases stencil value
  • "decr": decreases stencil value
  • "zero": sets stencil value to zero
  • "one": sets stencil value to one

System.use_stencil Set this to '1' to have the system only draw particles on spots where the stencil buffer is equal to 1.

System.scene_lighting Value from 0.0 to 1.0, which tells how much to weight the color of each particle by the scene lighting. Generally can be zero.


System.random_initial_angle Set this to '1' if the initial angle of particles should be randomized (0 to 2*PI) as opposed to initially being zero. This is generally a good idea for any particles, except those which have to be aligned a certain way, either because they are always oriented "right side up" (as opposed to upside down), or because they use rotate to velocity.

Particle effect files

At the moment, we don't have a particle effect editor. (And maybe never will, because there are so many parameters to build a GUI for, but we'll see... someday.) So, if you want to create your own effects, you have to write a .lua file for them manually. Here is an example of such a file:

-- Test particle effect
-- Last modified on November 10th, 2005
systems = {}
systems[1] =
{
    emitter =
    {
        -- (emitter properties go here)
    },

    keyframes =
    {
        { -- keyframe 1
            -- (keyframe properties go here)
        },
        { -- keyframe 2
            color_variation = {.1, .3, .3, 0},
        },
    },

    animation_frames =
    {
        -- (list of image filenames)
    },
    animation_frame_times =
    {
        -- (list of frame times, in 1/30ths of a second)
    },

    enabled = 1,
    blend_mode = 13,
    system_lifetime = 3,
    particle_lifetime = 0.4,
    particle_lifetime_variation = 0.00,
    max_particles = 150,
    damping = 1,
    damping_variation = 0,
    acceleration_x = 0,
    acceleration_y = 2530,
    acceleration_variation_x = 0,
    acceleration_variation_y = 0,
    wind_velocity_x = 0,
    wind_velocity_y = 0,
    wind_velocity_variation_x = 0,
    wind_velocity_variation_y = 0,
    wave_motion_used = 1,
    wave_length = .5,
    wave_length_variation = 0,
    wave_amplitude = 0,
    wave_amplitude_variation = 0,
    tangential_acceleration = 7880,
    tangential_acceleration_variation = 0,
    radial_acceleration = 0,
    radial_acceleration_variation = 0,
    user_defined_attractor = 0,
    attractor_falloff = 0,
    rotation_used = 1,
    rotate_to_velocity = 1,
    speed_scale_used = 1,
    speed_scale = 0.005,
    min_speed_scale = 1.0,
    max_speed_scale = 20.0,
    smooth_animation = 0,
    modify_stencil = 0,
    stencil_op = 'INCR',
    use_stencil = 0,
    scene_lighting = 0.0,
    random_initial_angle = 0.0
}

Distortion Meshes[edit]

Overview of distortion meshes

Distortion meshes are used to do "image warping" effects. The basic idea is that you create a uniform grid over an image, then add a small displacement to each node on the grid. Here is a picture below that clearly shows what this is all about:

Video engine distortion example.png

A distortion mesh is defined by several parameters:

  1. An image: a StillImage that you want to distort
  2. A distortion function: defines how vertices warp as a function of time
  3. A grid: specified by # of rows and columns
  4. An active rectangle: specifies which portion of the grid you want to warp

Note that the distortion function can affect the colors, in addition to the positions of the grid points.

What if you want to warp the entire screen, or a portion of the screen? In that case, you have to create a capture of the screen using GameVideo::CaptureScreen(), and then use that as the image in your distortion mesh. Then, each frame, the steps are:

  1. Draw the scene normally
  2. Capture the screen
  3. Draw the distortion mesh

Whenever possible, try to draw a warped image instead of warping what has already been drawn to the screen, because capturing and redrawing the screen is clearly more expensive.

Finally, one interesting effect that can be done using distortion meshes is a technique called "flow maps". The basic idea is: you capture the screen and then draw it warped. Then you capture the *warped* screen, and draw that warped... and so on. Also, each time you draw the screen, you draw it with some blending (not 100% opacity). The result is that your distortion mesh now functions like a vector field, telling which direction the pixels will "flow" at each point on the image.
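The feedback loop is easier to see on a toy 1-D "screen": warp (here, just a shift), blend with some opacity, repeat. With full opacity the pixels simply march in the shift direction; with partial opacity they leave a decaying trail. This is only a conceptual sketch, not how the engine does it:

```cpp
// Toy flow-map step on a 1-D pixel buffer: sample the previous frame at a
// warped position, then blend it over the current frame.
#include <vector>

void FlowMapStep(std::vector<float> &screen, int shift, float opacity) {
    std::vector<float> prev = screen;            // "capture" the screen
    int n = (int)screen.size();
    for (int i = 0; i < n; ++i) {
        int src = (i - shift + n) % n;           // warped sample position (wraps)
        // draw the warped capture with blending (not 100% opacity)
        screen[i] = screen[i] * (1.0f - opacity) + prev[src] * opacity;
    }
}
```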

This effect is used a lot in visualizations/plugins for WinAmp, Windows Media Player, etc. As for how it could be used on Allacrost, that's a bit trickier ;) One possible use is for battle transitions. If I recall correctly, Final Fantasy 7 had something like that when a battle starts. Another possible usage is for battle effects and spells. It might be interesting to try the flow map technique on a particular spell for example, rather than the entire screen.

Below is an example of the "flow map" technique. It is just drawing a spinning cube each frame:

Video engine flowmap example.png

How to use distortion meshes

// Initialization
DistortionMesh mesh;
mesh.Initialize(image, VIDEO_DISTORTION_TYPE_BLAH, num_columns, num_rows);
mesh.SetActiveRect(Rect(0, 1, 0, 1));

// Every frame
VideoManager->Move(x, y);
// (update and draw the mesh here; see the DistortionMesh interface in distortion_mesh.h)

// Cleanup
// (release the mesh when the effect is finished)

  • The 2nd parameter of Initialize() is the distortion function. Check the top of distortion_mesh.h to see what functions are available.
  • SetActiveRect() allows you to make it so only a sub-region of the grid is distorted. Coordinates go from 0.0f to 1.0f.
  • Note that the distortion mesh is affected by transformation and draw flags, just like images.

An example

There aren't a lot of functions in the DistortionMesh class, but it can still be a little tricky to use, so here is a concrete example. Suppose we want to create a heat haze effect, like the one in Final Fantasy 6 (shown below). In this effect, the mountains in the distance have a ripple distortion going through them.

Ff6 distortion.png

So, let's break down the steps we need to do:

  1. Ask Raj to write a distortion function :) This is the easy part, for you at least!
  2. Figure out what grid spacing to use. Try to use enough points that the effect looks good, but no more than you have to. For the above image, 16x16 points ought to work well.
  3. Set the active rectangle to only affect the mountains (not the ground or the sky). Looking at the above image, the bottom of the mountains is, say, 65% up from the bottom of the screen, and the top of the mountains is maybe 80% up from the bottom of the screen. So, we do:

mesh.SetActiveRect(Rect(0.01f, 0.99f, 0.65f, 0.80f)); // (left, right, bottom, top)

Note that we want the left and right edges of the screen to stay fixed. Why? Well, imagine what would happen if they moved around... You might have a situation where the screen isn't fully covered by the background image, so you can see whatever junk is behind the background image. This isn't too bad, because probably all that's behind the background image is the color black, but it still looks a little bit weird. So, instead of using (0.0f, 1.0f) as our left-right bounds, we use (0.01f, 0.99f).
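To see what those normalized coordinates mean for the grid, here is a hypothetical helper (not engine code) that maps a normalized coordinate onto the nearest node index of an evenly spaced grid axis:

```cpp
// Sketch: mapping a normalized active-rect coordinate (0.0 to 1.0) onto a
// grid-node index, assuming num_nodes evenly spaced nodes along the axis.
int NormalizedToNodeIndex(float coord, int num_nodes) {
    int index = (int)(coord * (num_nodes - 1) + 0.5f);   // round to nearest node
    if (index < 0) index = 0;
    if (index > num_nodes - 1) index = num_nodes - 1;
    return index;
}
```

For the 16-point grid suggested above, the 0.65 to 0.80 band covers roughly nodes 10 through 12.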

Writing your own distortion function

For the most part, you don't need to write distortion functions; I'll write them. The only reason I'm writing this section is so you know enough to be able to modify existing effects if you don't like them :)

The distortion functions are implemented as classes derived from DistortionMethod. Here's an example:

class WaveDistortionMethod : public DistortionMethod
{
public:
    WaveDistortionMethod(DistortionGrid *grid);
    void Update(float frame_time);
};

WaveDistortionMethod::WaveDistortionMethod(DistortionGrid *grid)
: DistortionMethod(grid)  // call base class constructor!
{
}

void WaveDistortionMethod::Update(float frame_time)
{
    DistortionMethod::Update(frame_time);  // call DistortionMethod::Update() always!

    // Implement your distortion function here:
    // loop over the rows/columns covered by the active rectangle
    for(int32 r = _active_rect.top; r > _active_rect.top - _active_rect.height; --r)
        for(int32 c = _active_rect.left; c < _active_rect.left + _active_rect.width; ++c)
            _grid->nodes[c + r * _grid->cols].y += 0.03f * sinf((_time + c*100 + r*50) / 100);
}

So ignoring the ugliness of this code, at least hopefully this gives you the gist of how this works :)

Now, go to the top of distortion_mesh.h, and add a value for your new distortion method in the enum, for example VIDEO_DISTORTION_TYPE_WAVE. This is the value that is passed in to the DistortionMesh::Initialize() function.

    VIDEO_DISTORTION_TYPE_HORIZONTAL_WAVE = 0, //! horizontal wave effect
    VIDEO_DISTORTION_TYPE_WAVE,                //! our new wave distortion

Finally, go into distortion_mesh.cpp, and look for a function called DistortionMethodFactory::GetDistortionMethod(). It basically consists of a switch() statement over all the possible distortion types, and returns a pointer of that type. Add yours, like this:

DistortionMethod *DistortionMethodFactory::GetDistortionMethod(VIDEO_DISTORTION_TYPE type, DistortionGrid *grid)
{
    switch(type)
    {
        case VIDEO_DISTORTION_TYPE_HORIZONTAL_WAVE:
            return new HorizontalWaveDistortionMethod(grid);
        case VIDEO_DISTORTION_TYPE_WAVE:
            return new WaveDistortionMethod(grid);
        default:
            return NULL;
    }
}

Phew. Now your new distortion method is ready to use. Note that there's one feature "missing", namely, you can't pass parameters to distortion effects, so any particular effect will always look exactly the same. This is the "interface complexity" vs. "lots of features" tradeoff that I made, feel free to argue if you feel otherwise :)

Fullscreen Overlays[edit]

A fullscreen overlay is nothing more than a big rectangle that gets drawn over the whole screen. You can make the rectangle any color you want, including settings its transparency using the alpha channel. Overlays are used internally by the video engine for effects like screen fading, lighting, and lightning. If you want to draw your own overlay, use the function below:

// draw a 50% translucent red quad over the whole screen
VideoManager->DrawFullscreenOverlay(Color(1.0f, 0.0f, 0.0f, 0.5f));

Note: Don't use this unless you absolutely have to; fullscreen alpha blending will kill the frame rate.

Lightning[edit]

Lightning is a cool, albeit cheap effect that is done by drawing a white fullscreen overlay over the screen and varying its transparency over time. Creating a lightning effect is pretty simple. To start a lightning effect, you do:


And then, every frame, you call:

VideoManager->DrawLightning();

You should make sure to call DrawLightning() after all the tiles/sprites are drawn, but BEFORE the GUI is drawn, because you don't want lightning effects to affect the GUI.

Note that the video engine only takes care of the visual aspect of the lightning; playing a thunder sound effect is up to you.

You might wonder where this magical ".lit" file comes from. Basically, I will take care of creating the .lit files myself, but if you're curious about how they are created, I wrote a tool (Windows-only) which reads a PNG and converts it to .lit. I could have written the tool to be cross platform but it was quicker this way, and I don't really imagine we'll need more than 2 or 3 good lightning effects, so I can easily take care of that myself. The PNG is just a graph showing the intensity over time, and looks something like this:

Video engine lightning intensity.png

Lighting and Fog[edit]

I decided to explain lighting and fog together in this section, because they are pretty closely tied, and it may be hard to understand the difference between them.

Fog To create fog, you specify a color and an opacity, and the fog then covers the entire screen. Here is an example:

VideoManager->EnableFog(Color::gray, 0.5f);

This covers the screen in a layer of gray fog which is 50% opaque. Pretty self explanatory :) Note that every EnableFog() call MUST have an associated DisableFog() call when your drawing routine ends, so the fog does not carry over to other drawing modes.

Scene lighting (a.k.a. "sky lighting") Scene lighting takes each pixel on the screen and multiplies it by the scene's color. The default scene lighting is Color::white, which doesn't do anything, because multiplying any color by white produces the same color.

The main use for scene lighting is for producing night scenes, or sunrise/sunset. Make sure you never set any of the RGB components to zero. For example if you set the scene color to red (i.e. R=1.0f, G=0.0f, B=0.0f), then anything which doesn't have any red in it (for example, a completely blue object) will appear black.
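A sketch of what that multiply does to a pixel (plain C++, not engine code); note what happens to a pure-blue pixel under a pure-red scene color:

```cpp
// Scene lighting as componentwise multiplication of pixel color by the
// scene color.
struct RGB { float r, g, b; };

RGB ApplySceneLighting(RGB pixel, RGB scene) {
    RGB out = { pixel.r * scene.r, pixel.g * scene.g, pixel.b * scene.b };
    return out;   // a zero in any scene component wipes that channel out
}
```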

Scene lighting is pretty easy to use. You simply call the SetLighting() function as below:

VideoManager->SetLighting(Color(0.5f, 0.5f, 1.0f)); // (example values: a bluish night tint)

When to use lighting vs. fog There are 3 big differences between the two effects:

  • Lighting is cheaper than fog. So if you need to make a night scene, use lighting instead of fog!
  • Fog is... foggier. When you apply lighting to a scene, its color changes, but the screen still looks crystal clear. When you use fog, the screen loses some contrast, so it looks more "hazy" or "foggy".
  • Scene lighting cannot brighten the screen. The brightest the screen can get is when you use a lighting color of white, and that produces just "normal colors". If you make the screen red, you're actually not making it more red- you're making it less green and blue. If you want to brighten a dark scene, you could use fog, but that doesn't really make it look bright, it just makes it look like a foggy night. What you could do is use a fullscreen overlay with additive blending, although that's really expensive. The cheapest way to do it is to use pixel shaders, but that only works on cards which support it.
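The "foggier" point can be seen in pixel math. Assuming fog is a simple blend toward the fog color (the engine's exact formula may differ), a 50% gray fog pulls black and white toward each other, halving the contrast, which is exactly the hazy look described above:

```cpp
// Sketch: fog as a blend toward the fog color.
struct Pixel { float r, g, b; };

Pixel ApplyFog(Pixel p, Pixel fog_color, float opacity) {
    // darken the existing scene, then add in the fog color
    Pixel out = {
        p.r * (1.0f - opacity) + fog_color.r * opacity,
        p.g * (1.0f - opacity) + fog_color.g * opacity,
        p.b * (1.0f - opacity) + fog_color.b * opacity
    };
    return out;
}
```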

Note that once you set a lighting color for the scene, any image you draw will be affected!!! If you want to draw an image which doesn't use the scene color, then you should pass Color::white to the Draw() function, like this:

my_image.Draw(Color::white);

Whoops- for now the only way to do this is by doing VideoManager->DrawImage(my_image, Color::white). I will add a color parameter to the StillImage and AnimatedImage Draw() functions so that the above code works.

Halos Halos are very simple: just an additively blended image that you draw on the screen. Really, that's all there is to it! This may be useful, for example, if you want to draw a halo for a torch. Another really nice effect is a campfire that emits a large glow. This kind of effect is used in a lot of anime and makes the image look a lot more interesting and warm.

Lighting tiles Lighting tiles are another really simple thing, and not really even part of the video engine. They are simply special tiles that the map designer places in the map editor, which are white-colored and use alpha blending. A typical use of lighting tiles is rays of light, like the Chrono Trigger screenshot at the bottom of this page.

Point lights Point lights light up an area that was darkened by scene lighting.

This is difficult to explain, so here is an image:

Video engine halo lights.jpg

If your monitor's brightness isn't too high, hopefully you can see what's going on here. Basically, we have a scene which is really dark because the scene color is dark. Then we take a translucent white rectangle and apply it to the scene. In the case of a halo, what happens is that we add white to the existing image, but that doesn't actually brighten the background; it's just like a white rectangle sitting on top of a dark background. It even looks a bit like fog, and for a good reason: mathematically, fog works by darkening the color of the existing scene and then adding white to it. A real light, on the other hand, actually brightens the colors of the scene!

As another example, imagine that the scene was COMPLETELY black, and you were just wandering around the map with a torch (for example, the caves in Dragon Warrior 1). If you drew a halo surrounding the main character, all you would see is a grayish circle on the screen. However, if you drew a real light at the position of the player, you'd see the player properly.

The downside is that point lights are extremely expensive. In particular, using 1 point light (even a tiny one) adds a big performance hit as compared to zero real lights. However, once you draw one real light on the screen, drawing additional ones is very cheap.

Real lights are implemented using something called "rendering to a texture", which unfortunately means that the drawing order becomes a little goofy. Say you are trying to draw using real lights in map mode... Instead of just drawing the map and then drawing the lights on the map, you have to add a few steps at the beginning, so it becomes:

// Step 0 happens once, when the player enters a map
0. Enable point lights

// Steps 1-6 describe MapMode::Draw()
1. Determine which lights are currently on-screen (not necessary, but more efficient)
2. Draw all the real lights to the screen
3. Accumulate all the lights into one texture (moved into the apply function)
4. NOW, draw the map (tiles, characters, what not)
5. Apply the lighting texture
6. Disable lighting

So, let's go step by step...

Step 0- The lighting texture is only created if you call VideoManager->EnableRealLights(true). I named this "step 0" to emphasize that this step isn't done every time the map is drawn, like the others are. The correct time to call EnableRealLights() is when the player goes on to a new map. If the map doesn't have any real lights then this should be set to false to preserve texture memory, otherwise true.

Step 1- Take all the lights for the current map, and determine which ones are actually on the part of the map which is currently being viewed.

Step 2- Loop through all the visible lights and draw them as shown below:

VideoManager->DrawLight(light_image[i], x, y, light_color[i]);

Step 3- Accumulate all the lights into one texture:

VideoManager->AccumulateLights(); // (this call has been moved into the ApplyLightingOver() function)

Step 4- Draw the map

Step 5- Apply the lighting texture that was created in step 3

VideoManager->ApplyLightingOver();

Step 6- Disable point lights when you're done drawing, so the video state is left the same as you found it.

Not too difficult, you just need to be aware of the screwy drawing order.

How to draw GUI without lighting and fog When you use lighting and fog, you have to be a bit careful to turn those settings off when you draw the GUI!!

Some pictures Here is a picture that uses scene lighting, halos, and real lights. The scene lighting is used to make the background bluish/blackish. The real lights are the big orange circles. Finally, the halos are the yellow, small circles. As you can see, a good effect is to place a halo and then a real light to illuminate the area surrounding the light.

Video engine lighting example.jpg

Pop quiz... Is the image below using scene lighting or fog??? (Answer: It's using fog. You can tell because it looks... foggy).

Video engine fog example.jpg

Example of using tile-based lighting... The rays of light can be animated as well, making a really nice effect.

Chrono trigger tile lighting.jpg

Screen Fading[edit]

You can fade the screen by using the FadeScreen() function, passing it the color you want to fade to, as well as the duration of the fade, in seconds.

VideoManager->FadeScreen(Color::white, 0.5f); // fade to white in half a second
... half a second later ...
VideoManager->FadeScreen(Color::black, 0.5f); // fade back to normal colors

There are a few things to note:

  • Fading to black is MUCH more efficient than fading to other colors
  • If you start a new fade while another one is in progress, the current fade will stop and the new one will start.
  • GameVideo::IsFading() can tell you if the screen is in the middle of a fade
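Internally, a fade is presumably just a fullscreen overlay whose alpha ramps from 0 to 1 over the duration. A sketch of the ramp (not engine code):

```cpp
// Sketch: overlay alpha for a fade at a given elapsed time.
float FadeAlpha(float elapsed, float duration) {
    if (duration <= 0.0f || elapsed >= duration) return 1.0f;  // fade complete
    if (elapsed <= 0.0f) return 0.0f;                          // fade not started
    return elapsed / duration;                                 // linear ramp
}
```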

Screen Shaking[edit]

The ShakeScreen() function takes 3 parameters: a "force", a falloff time, and a falloff method. The force represents how much the screen should be displaced and is measured in pixels. The falloff time is the period, in seconds, over which the shaking falls off to zero. If you pass in zero, the shaking will continue forever, until you call StopShake().

VideoManager->ShakeScreen(5.0f, 0.5f, VIDEO_FALLOFF_NONE); // small shake for half a second w/ constant force

Since we passed in VIDEO_FALLOFF_NONE, the shake above just uses a constant force. We can make some interesting effects by using other falloff modes though. Here is a picture which shows what all the different modes look like:

Video engine shake types.png

You can also “composite” multiple shakes together simultaneously. As an example, let's say you cast a meteor spell... We might want to have a general “rumbling” shake going on, and then a big impact shake when meteors hit. So here's how we could code that:

// at beginning of meteor spell:
VideoManager->ShakeScreen(3.0f, 0.0f); // constant, small rumbling
// any time a meteor strikes:
VideoManager->ShakeScreen(50.0f, 0.5f, VIDEO_FALLOFF_SUDDEN); // SUDDEN is good for “impact” shakes
// at end of meteor spell:
VideoManager->StopShaking(); // turn off the rumble effect
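To see how falloff and compositing interact, here is a rough sketch of the math involved. The exact falloff curves the engine uses aren't documented here, so the formulas below (linear, and a quadratic “sudden” curve) are assumptions for illustration; the compositing step just sums the displacement of every active shake.

```cpp
// Hypothetical falloff curves; the engine's exact formulas may differ.
enum ShakeFalloff { FALLOFF_NONE, FALLOFF_LINEAR, FALLOFF_SUDDEN };

// Fraction of the original force remaining at time t (seconds) into a
// shake with the given falloff period.
float ForceRemaining(ShakeFalloff mode, float t, float falloff_sec)
{
    if (falloff_sec <= 0.0f)       // falloff time of zero: shake forever
        return 1.0f;
    float progress = t / falloff_sec;
    if (progress >= 1.0f)          // falloff period over: shake is done
        return 0.0f;
    switch (mode) {
        case FALLOFF_NONE:   return 1.0f;             // constant until it ends
        case FALLOFF_LINEAR: return 1.0f - progress;  // straight line to zero
        case FALLOFF_SUDDEN:                          // front-loaded "impact"
            return (1.0f - progress) * (1.0f - progress);
    }
    return 0.0f;
}

// Composite several simultaneous shakes by summing their displacements.
float CompositeForce(const float* forces, const float* remaining, int count)
{
    float total = 0.0f;
    for (int i = 0; i < count; ++i)
        total += forces[i] * remaining[i];
    return total;
}
```

In the meteor example above, the total displacement at any instant would be the rumble's constant force plus whatever remains of the most recent impact shake.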

Transition Tiles[edit]


A transition tile is basically any tile type that is actually composed of multiple tiles, one for each possible transition. Hopefully the idea of transitions is pretty self-explanatory, but if it's not, here is an example:

Video engine transition tiles example.png

To create this pond, there's not just 1 water tile, but many. When the map designer lays out water in the map editor, the editor automatically figures out what transition to use based on the tile's connection to its neighboring tiles.
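One common way to pick transitions (not necessarily what the Allacrost map editor actually does) is a neighbor bitmask: each of the 4 cardinal neighbors that holds the same tile type sets one bit, producing one of 16 transition indices. A sketch:

```cpp
// Hypothetical 4-bit autotiling scheme: each cardinal neighbor that is
// also water (for example) sets one bit, yielding one of 16 possible
// transition indices.
enum { NEIGHBOR_NORTH = 1, NEIGHBOR_EAST = 2,
       NEIGHBOR_SOUTH = 4, NEIGHBOR_WEST  = 8 };

int TransitionIndex(bool north, bool east, bool south, bool west)
{
    int index = 0;
    if (north) index |= NEIGHBOR_NORTH;
    if (east)  index |= NEIGHBOR_EAST;
    if (south) index |= NEIGHBOR_SOUTH;
    if (west)  index |= NEIGHBOR_WEST;
    return index;
}
```

Index 15 (all neighbors set) would be the plain interior tile, while index 0 is an isolated tile surrounded entirely by other terrain.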

The video engine supports two types of transitions: autotiling and blend masking.


Autotiles

So, what is an autotile? Well, nothing special really. It's just one big image which contains all the transition tiles inside it. The picture below shows the autotile on the left, and to the right it shows what the individual tiles look like when you split them out.

Video engine autotile example.png

The reason we group them all into 1 image is basically to preserve the map designer's sanity. For an animated autotile with, say, 5 frames of animation and 12 transitions, that would otherwise be 60 separate image files, which is a bit of a pain.

Here is how you load an autotile:

vector <StillImage> tiles;
VideoManager->LoadTransitions("water_autotile.png", tiles);

Very simple :) The LoadTransitions() function just slices the water_autotile.png into tiles and dumps them into the tiles vector that you passed in. As for how you actually draw autotiles or how to write the map editor logic to figure out which transitions to use, that is not part of the video engine. Also note that all the images you load with LoadTransitions() should be freed using StillImage::Delete().
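The slicing that LoadTransitions() performs boils down to simple sub-rectangle math. Here is a sketch of computing where a given tile lives inside the big autotile image; the 32x32 tile size and row-major grid layout are assumptions for illustration, not guarantees about the engine.

```cpp
// A sub-rectangle of the autotile image, in pixels.
struct TileRect { int x, y, width, height; };

// Pixel rectangle of transition tile 'index' within an autotile image
// laid out as a row-major grid, 'columns' tiles per row.
// (The 32x32 default tile size is an illustrative assumption.)
TileRect AutotileRect(int index, int columns, int tile_size = 32)
{
    TileRect r;
    r.x = (index % columns) * tile_size;  // column within the grid
    r.y = (index / columns) * tile_size;  // row within the grid
    r.width  = tile_size;
    r.height = tile_size;
    return r;
}
```

For example, in a 4-tiles-wide autotile, tile number 5 would sit one row down and one column across.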

Blend masks

Blend masks allow you to take a single tile and a set of "blend masks", one for each transition, and multiply them together to produce a set of transition tiles. Here is a picture below:

Video engine tile blend example.png

The blend mask image looks exactly like the autotile image, except it's grayscale instead of actual colors. Note that artists do NOT have to bother with alpha channels for blend masks; you just use grayscale colors, and the video engine will automatically convert those into transparency values.

Here is the code for loading a blend masked transition:

vector <StillImage> tiles;
VideoManager->LoadTransitions("grass.png", "blendmask.png", tiles);

Here, "grass.png" is a single 32x32 grass tile, and "blendmask.png" is a big image containing all of the blend masks.
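The "multiply" step is conceptually simple: each grayscale mask pixel becomes an alpha value for the corresponding tile pixel, so white keeps the tile fully opaque and black makes it fully transparent. The sketch below works on raw 0-255 channel values; the Pixel layout is an illustrative assumption, not the engine's actual image format.

```cpp
// Hypothetical 8-bit RGBA pixel, for illustration only.
struct Pixel { unsigned char r, g, b, a; };

// Conceptual blend-mask multiply: scale the tile pixel's alpha by the
// mask's grayscale value (255 = fully opaque, 0 = fully transparent).
Pixel ApplyBlendMask(Pixel tile, unsigned char mask_gray)
{
    tile.a = static_cast<unsigned char>((tile.a * mask_gray) / 255);
    return tile;
}
```

Running this over every pixel of the tile, once per mask in the blend mask image, yields the full set of transition tiles.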

When to use autotiles vs. blend masks

The main advantage of blend masks is that they are re-usable. So if you make a grass tile, and then make transitions for it using a blend mask, then when you make a dirt tile, you can just reuse the same transitions you made for the grass. Also, you can use multiple blend masks on the same tile to create interesting effects. For example, below you can see 2 examples which both basically use grass tiles, but with different kinds of blending.

Video engine transition tiles example2.png

The main limitation of blend masks is that they can't do complex transitions. For example, if you look at the water tiles from the "Autotiles" section, note that the water is surrounded by a border of dirt. Since the dirt is there, each transition tile has to be hand-made. So in summary, use autotiles if you need complex transitions, otherwise use blend masks.

Animated transition tiles

The video engine doesn't provide any support for loading animated transitions, so you need to do that yourself if you want that ability. It's very easy: just load all the transitions as still images, add them as frames to AnimatedImages, and then delete the old still images, since they were only needed as an intermediate step. (Remember how the reference counting works!)
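As a sketch of that regrouping step, here is one way to turn the flat list of loaded stills into per-transition frame lists. It assumes (illustratively) that you called LoadTransitions() once per animation frame, so the flat vector holds all transitions of frame 0, then all of frame 1, and so on; the real AnimatedImage API may look different.

```cpp
#include <vector>

// Regroup a flat list of transition tiles into per-transition frame
// lists, e.g. 60 stills (12 transitions x 5 frames) into 12 animations
// of 5 frames each. Assumes the flat vector is ordered frame-major:
// all transitions of frame 0 first, then all of frame 1, and so on.
template <typename Image>
std::vector<std::vector<Image>>
GroupIntoAnimations(const std::vector<Image>& stills,
                    int num_transitions, int num_frames)
{
    std::vector<std::vector<Image>> animations(num_transitions);
    for (int f = 0; f < num_frames; ++f)
        for (int t = 0; t < num_transitions; ++t)
            animations[t].push_back(stills[f * num_transitions + t]);
    return animations;
}
```

Each inner vector then holds the frames for one transition, ready to be handed to an animated image, after which the intermediate stills can be deleted.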