Silverlight Custom Bitmap Effects - more HLSL
Written by Mike James   
Monday, 26 July 2010

Dependency property types

How a dependency property is used to load the register it is associated with depends on its type. WPF/Silverlight currently supports the following dependency property types:

  • Double
  • Single or float
  • Color
  • Size
  • Point
  • Vector
  • Point3D
  • Vector3D
  • Point4D

In each case the C# data type is packed into the four elements of the HLSL float4 register.


In our example the register is going to be used as a color so it makes sense to define a Color dependency property and allow the system to perform the mapping to the float4 register.

Defining the dependency property follows the usual course. First we create a standard set/get property that uses the dependency property:

public Color PixelColor
{
    get { return (Color)GetValue(PixelColorProperty); }
    set { SetValue(PixelColorProperty, value); }
}

Then we create the dependency property, adding "Property" to the end of the name:

public static readonly DependencyProperty PixelColorProperty =
    DependencyProperty.Register(
        "PixelColor", typeof(Color),
        typeof(BlankEffect),
        new PropertyMetadata(Colors.White,
            PixelShaderConstantCallback(0)));

Notice that this is a perfectly normal dependency property apart from the use of PixelShaderConstantCallback(0), which connects the property to the c0 register.
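Exactly the same pattern works for the other property types listed earlier. For example, here is a sketch of a Double property bound to the c1 register - the name Threshold, the default value and the register index are purely illustrative and not part of the example we are building:

// Illustrative only - a Double property mapped to constant register c1
public static readonly DependencyProperty ThresholdProperty =
    DependencyProperty.Register(
        "Threshold", typeof(double),
        typeof(BlankEffect),
        new PropertyMetadata(0.5,
            PixelShaderConstantCallback(1)));

public double Threshold
{
    get { return (double)GetValue(ThresholdProperty); }
    set { SetValue(ThresholdProperty, value); }
}

In the HLSL such a scalar would be picked up with a declaration of the form float threshold:register(c1).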

Now we can create an effect object, set the dependency property and use it to modify the way a button displays.

For example:

BlankEffect BE = new BlankEffect();
BE.PixelColor = Color.FromArgb(255, 0, 255, 0);
button1.Effect = BE;

We have set the color to full green.

All of these modifications and additions are to the BlankEffect class created in the first article. The complete listing of this class is:

public class BlankEffect : ShaderEffect
{
    public static readonly DependencyProperty PixelColorProperty =
        DependencyProperty.Register(
            "PixelColor", typeof(Color),
            typeof(BlankEffect),
            new PropertyMetadata(Colors.White,
                PixelShaderConstantCallback(0)));

    public Color PixelColor
    {
        get { return (Color)GetValue(PixelColorProperty); }
        set { SetValue(PixelColorProperty, value); }
    }

    public BlankEffect()
    {
        // load the compiled pixel shader from the project's resources
        PixelShader pixelShader = new PixelShader();
        Uri uri = new Uri(
            "/SilverHSL;component/shader/shader1.ps",
            UriKind.Relative);
        pixelShader.UriSource = uri;
        this.PixelShader = pixelShader;
    }
}

Notice that you have to modify the pack URI to make sure it gives the location of the compiled shader .ps file. This file also has to be added to the project and it has to be set to a resource - see the previous article for details of how to set everything up correctly.
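As a reminder, the relative pack URI follows the general pattern /AssemblyName;component/path-to-resource. The assembly and folder names in this sketch are simply placeholders for your own:

// general form: "/<AssemblyName>;component/<path to the .ps resource>"
Uri uri = new Uri(
    "/MyEffectsProject;component/shader/shader1.ps",
    UriKind.Relative);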

Finally for it all to work we need to modify the shader code:

float4 pixelcolor:register(c0);
float4 main(float2 uv:TEXCOORD):COLOR
{
  return pixelcolor;
}

Remember to save and compile the shader .fx file to create an updated .ps file.

If you now run the program you will see a green block appear in place of the button's usual rendering.

Samplers

So far all we have managed to do is pass a constant value to the shader and return it as the color of the pixel. We obviously need to gain access to and process the pixels that rendering the control actually produces.

The key to this, and to working with bitmaps in shaders in general, is the sampler.

A sampler is a bitmap, usually a small one, stored in video memory. A sampler can be used in many different ways.

For example, if you have a small bitmap of a section of texture, fur say, you can use it to map onto another object as it renders to the screen.

This is the original and most common use of the sampler and it is the reason a sampler is called a sampler - i.e. it allows you to sample a texture. However, it is also possible to use samplers for many other purposes including just rendering an image to the screen.

Samplers are passed to an HLSL program using registers. In pixel shader 2.0 you can use up to 16 samplers, specified in registers s0 to s15, but WPF/Silverlight limits you to a maximum of four.

Using a sampler follows the same steps as using a constant.

In the HLSL program you first declare a variable and associate it with a shader register. For example:

sampler2D bitmap1:register(s1);

sets up the variable bitmap1 as a sampler2D data type and associates it with register s1. Following this declaration you can work with bitmap1 as if it were a 2D bitmap sampler.

In the C# program you have to create a dependency property of type Brush and associate it with the sampler register using the special RegisterPixelShaderSamplerProperty static method supplied by the ShaderEffect class.

That is, to make the connection between the bitmap represented by the Brush and the sampler register you have to register the dependency property in a special way.

Once you have the dependency property and the sampler set up, you can set the dependency property to a suitable bitmap within your C# program and work with it as the sampler in your HLSL program.
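As a sketch of the registration step - the property name Input1 is just an illustrative choice, it is shown added to the BlankEffect class for convenience, and the index 1 ties it to sampler register s1 as used in the shader below:

public static readonly DependencyProperty Input1Property =
    RegisterPixelShaderSamplerProperty(
        "Input1", typeof(BlankEffect), 1);  // 1 => sampler register s1

public Brush Input1
{
    get { return (Brush)GetValue(Input1Property); }
    set { SetValue(Input1Property, value); }
}

Apart from the call to RegisterPixelShaderSamplerProperty this is just a standard dependency property of type Brush.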

Let's see each step in action by using a sampler to define what is rendered for a button object.

First let's create the shader program:

sampler2D bitmap1:register(s1);
float4 main(float2 uv:TEXCOORD):COLOR
{
  float4 color=tex2D(bitmap1,uv);
  return color;
}

This associates the variable bitmap1 with sampler register s1. In the body of the function we make use of the function tex2D, which takes a sampler as its first parameter and a texture coordinate as its second. It returns the color of the pixel at the coordinate specified by uv, and this is returned as the color of the rendered pixel. Hence we are simply transferring the image in the sampler to the output target.
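To complete the picture on the C# side, here is a hedged sketch of how the effect might be applied - the Input1 property is the one from the registration sketch above, and picture.jpg is simply assumed to be an image added to the project:

// create a brush from an image added to the project (name is a placeholder)
ImageBrush brush = new ImageBrush();
brush.ImageSource = new BitmapImage(
    new Uri("picture.jpg", UriKind.Relative));

BlankEffect BE = new BlankEffect();
BE.Input1 = brush;        // the brush is fed to sampler register s1
button1.Effect = BE;

When the button renders, the shader now samples the brush's bitmap rather than the button's own rendering.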

Texture co-ordinates

At this point we need to understand texture coordinates a little better.

Texture co-ordinates always work in the same way. The top left-hand corner is (0,0) and the bottom-right is (1,1) - irrespective of the number of pixels in the graphic.

What this means is that texture co-ordinates always specify a point within the graphic and graphics are automatically scaled to fit the area they are being mapped to. 

In this case the input texture co-ordinate uv which is passed into the shader is a point in the area to be rendered, i.e. in this example the button's render area. The same (0,0) to (1,1) co-ordinates are mapped to the sampler's entire area, with the result that the entire sampler is mapped to the entire button render area.




