Wednesday, August 25, 2010

Epiphanies

So after four months of ever-so-slowly moving forward on cell blending, I've finally seen the light at the end of the tunnel. Just in time to have a sudden inspiration on how to do it all more easily (sort of).

The goal: enable heightmap and texture blending between cells.
It sounds innocent enough - just find the affected mesh vertices and blend away! Four months of slow work (broken by countless TF2 sessions and some monumental Minecraft bases), and I've finally got step 1 working. There's still some clean-up work on step 1, but that'll be easier, and the motivation is now returning.

The problems: the previous major version did a couple of things wrong. I got heightmapping and texturing working over the entire sphere using a Miller projection. This had potential, but fell short in several areas: the 'hexness' of the map was lost, and it required a very, very high resolution texture to look decent.

I wanted to maintain the hex feel of the map, yet be able to blend heightmaps to create continuous mountain ranges, sea beds, and other land features. Each cell can align differently in UV space, meaning seams wouldn't match up as easily as on a flat 2D grid. The first idea was to add more texture channels to each mesh vertex:
  1. Base cell UVs
  2. Neighbor channel 1 (neighbors 1 & 3)
  3. Neighbor channel 2 (neighbors 2 & 4)
  4. Neighbor channel 3 (neighbors 3 & 6)
  5. Optional detail channel 1 (micro detail - bumpmap, etc...)
  6. Optional detail channel 2 (macro, miller projection - color shifts, shading...)
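For illustration, that channel list could land in a custom vertex format something like this (a sketch against XNA 4's IVertexType; the field names, byte offsets, and usage indices are my guesses, not the project's actual format):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical vertex format matching the channel list above.
public struct BlendedCellVertex : IVertexType
{
    public Vector3 Position;
    public Vector2 BaseUV;       // 1. base cell UVs
    public Vector2 NeighborUV1;  // 2. neighbor channel 1 (neighbors 1 & 3)
    public Vector2 NeighborUV2;  // 3. neighbor channel 2 (neighbors 2 & 4)
    public Vector2 NeighborUV3;  // 4. neighbor channel 3
    public Vector2 DetailUV;     // 5. micro detail channel
    public Vector2 MacroUV;      // 6. macro (Miller projection) channel

    // Byte offsets: Vector3 = 12 bytes, each Vector2 = 8 bytes after that.
    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
        new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0),
        new VertexElement(20, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 1),
        new VertexElement(28, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 2),
        new VertexElement(36, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 3),
        new VertexElement(44, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 4),
        new VertexElement(52, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 5));

    VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
}
```

The HLSL side would then read each channel as TEXCOORD0 through TEXCOORD5 and blend between the sampled textures.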
Then to write the HLSL shader to handle the custom vertex format...

Then to write all the testing tools to ensure that the mesh is being built correctly...

The testing tools allowed me to achieve a nice unwrap of the icosahedron mesh, and from there to write a 2D-map importer... This step turned out to be very useful. I can open up Photoshop, set an indexed color palette, draw the map over the top of an exported unwrap of the cell points, then read it back into the game.
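The read-back step could be sketched like this (a hypothetical helper using System.Drawing, not the project's actual importer; a real one would read the raw palette-index bytes via LockBits rather than GetPixel):

```csharp
using System;
using System.Drawing;

static class MapImport
{
    // Returns the palette slot of the pixel at a cell's unwrapped 2D point.
    // That slot indexes into whatever terrain table the palette was set up for.
    public static int TerrainIndexAt(Bitmap map, Point cellPoint)
    {
        Color c = map.GetPixel(cellPoint.X, cellPoint.Y);
        return Array.FindIndex(map.Palette.Entries, p => p.ToArgb() == c.ToArgb());
    }
}
```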
Finally, the blending step. The problem is that each cell is independently textured. This ensures better resolution when zoomed in without massive textures, and helps maintain the hex board game feel. The first plan went something like this:
  1. Create base cells
  2. Build base UV coordinates as a single large hexagon or pentagon
  3. Find and sort neighbor cells
  4. Start building render mesh
    • Subdivide each cell for the render mesh.
    • Lookup neighbor mesh vertices 'affected' by each cell vertex.
    • Take each affected vertex's appropriate texture channel (each vertex has 4+ texture channels - base, neighbor 1, 2, and 3) and align it to the cell edge. This is the part that proved to take the longest, and I've still got the texture blending ahead.
The epiphany here is to incorporate polar coordinates for better affected-vertex lookup and UV texturing. Hexes bordered by a pentagon have some alignment issues along neighbor seams due to the pentagons' different base angles. Switching to building mesh UVs off polar coordinates should solve this issue, as well as some of the slight texture warping that would otherwise occur.
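As a standalone sketch of the radial layout idea (my own formulation, not the project's code): placing an n-sided cell's corner UVs evenly around the texture center means a hexagon (n = 6) and a pentagon (n = 5) share one parameterization, despite their differing corner spacing (60° vs 72°).

```csharp
using System;

struct UV
{
    public double U, V;
    public UV(double u, double v) { U = u; V = v; }
}

static class PolarCellUV
{
    // Corner UVs for an n-sided cell, laid out radially in [0,1] x [0,1],
    // starting at the top of the texture.
    public static UV[] CornerUVs(int n)
    {
        var corners = new UV[n];
        for (int i = 0; i < n; i++)
        {
            double a = 2.0 * Math.PI * i / n - Math.PI / 2.0; // top corner first
            corners[i] = new UV(0.5 + 0.5 * Math.Cos(a), 0.5 + 0.5 * Math.Sin(a));
        }
        return corners;
    }
}
```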

Edit:

Scrap that - polar coordinates didn't work for the UV mapping. Things get too funky around the poles.

Random Picture

Wednesday, August 4, 2010

ASP.Net MVC ImageActionLink

The default Html.ActionLink() only takes text, and I often use icons to save space or for aesthetics. Here's a simple helper that takes the image URL and action as the only required parameters, using C# 4.0's fancy new optional parameters for the rest. Writing all the various method overloads to mirror ActionLink sounded like overkill.

/// <summary>
/// Returns an anchor element to the virtual path of the specified action with an image for the content
/// </summary>
/// <param name="html">The HtmlHelper instance this method extends</param>
/// <param name="imageUrl">URL of the image to render inside the link</param>
/// <param name="action">Name of the action</param>
/// <param name="altText">Alt text for the image</param>
/// <param name="controllerName">Name of the controller</param>
/// <param name="routeValues">An object containing the route parameters</param>
/// <param name="imageHtmlAttributes">HTML attributes to set on the img element</param>
/// <param name="htmlAttributes">HTML attributes to set on the anchor element</param>
/// <returns>MvcHtmlString</returns>
public static MvcHtmlString ImageActionLink(
    this HtmlHelper html, 
    string imageUrl, 
    string action, 
    string altText = null, 
    string controllerName = null, 
    object routeValues = null,
    IDictionary<string, object> imageHtmlAttributes = null,
    IDictionary<string, object> htmlAttributes = null)
{
    // Build the link with placeholder text, then swap in the img tag afterward
    // (ActionLink HTML-encodes its link text, so we can't pass markup directly)
    MvcHtmlString link = html.ActionLink(
        "[replace_me]", 
        action, 
        controllerName, 
        routeValues, 
        htmlAttributes);

    TagBuilder builder = new TagBuilder("img");
    builder.MergeAttributes(imageHtmlAttributes);
    builder.MergeAttribute("src", imageUrl);
    if (altText != null)
        builder.MergeAttribute("alt", altText);
   
    return MvcHtmlString.Create(
        link.ToString().Replace(
            "[replace_me]",
            builder.ToString(TagRenderMode.SelfClosing))
    );
}
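Usage from a view then looks something like this (the image path and action name here are made up):

```
<%: Html.ImageActionLink("/Content/icons/edit.png", "Edit", altText: "Edit item") %>
```

With default routes, that renders roughly `<a href="/Home/Edit"><img alt="Edit item" src="/Content/icons/edit.png" /></a>`.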

Monday, August 2, 2010

XNA 4 Beta

Microsoft's been pushing Windows Phone 7 development quite hard for the past few months. The XNA and Silverlight teams have been putting out news and examples and generally evangelizing it. I don't have much interest in phone development yet, but the latest version of the Windows Phone 7 SDK includes the XNA 4 beta (and is the only place to download it). The team's been hard at work updating XNA for greater cross-platform development and cleaning up some of the oddities in the API. In switching OTD over to the beta, I found a fair number of things that either haven't been implemented yet or have just moved somewhere new. I actually gave up after a while and started a new dummy project to walk through the basics and start from a cleaner code base.

Changes

This is by no means a comprehensive list - just a short list of things I've encountered and my solutions (if any) so far.

  • Content folder is now a separate project by default. Cool, this works, no problem here.
  • BasicEffect no longer uses SetParameter; all parameters are now properties.

BasicEffect

Shawn Hargreaves covers the two changes to BasicEffect over on his blog.

Rendering Sample

Below is a simple object generated from the pipeline, consisting of just a VertexBuffer and IndexBuffer. We reference the BasicEffect and GraphicsDevice statically. View, World, and Projection are much simpler as properties now. Within the effect pass loop, we no longer Begin() and End() the effect and passes; we only Apply() each pass at the start. Less code, fewer mistakes.

The way vertex and index buffers are set on the device has changed to a simpler interface as well. Vertex buffers now carry their own format information, and a single stream holds only one vertex type. We set lengths by number of vertices and indices rather than byte sizes. The call to set the vertex buffer is much simpler.

class Sphere
{
    public VertexBuffer Vertices;
    public IndexBuffer Indices;

    public void Draw(Matrix view, Matrix projection)
    {
        // Set shader projections
        Core.BasicEffect.View = view;
        Core.BasicEffect.World = Matrix.Identity;
        Core.BasicEffect.Projection = projection;

        // Set vertex and index buffers on the device
        Core.GraphicsDevice.SetVertexBuffer(Vertices);
        Core.GraphicsDevice.Indices = Indices;

        foreach (EffectPass pass in Core.BasicEffect.CurrentTechnique.Passes)
        {
            pass.Apply();

            Core.BasicEffect.GraphicsDevice.DrawIndexedPrimitives(
                PrimitiveType.TriangleList,
                0,
                0,
                Vertices.VertexCount,
                0,
                Indices.IndexCount / 3);
        }
    }
}

Content Pipeline

When building vertex buffers in the pipeline, a few things have changed as well. Since vertex formats are now embedded in the buffer object, it takes an extra bit of initialization: a new class, VertexBufferContent, is needed.

VertexBufferContent

Building the VertexBufferContent is pretty simple since the information is available in the vertex format - we just need to loop through the VertexElements.

// Generate the VertexDeclarationContent from the VertexElements in our format
VertexDeclarationContent vdc = new VertexDeclarationContent();
foreach (VertexElement ve in VertexPositionColor.VertexDeclaration.GetVertexElements())
{
    vdc.VertexElements.Add(ve);
}

// Set the declaration for the buffer
Vertices.VertexDeclaration = vdc;

Code

Here's the whole pipeline object that gets processed.

public class SphereContent
{
    public VertexBufferContent Vertices;
    public IndexCollection Indices;

    public SphereContent(int subdivisionLevel)
    {
        Vertices = new VertexBufferContent();
        Indices = new IndexCollection();

        // Generate the vertices
        List<VertexPositionColor> vertices;
        List<int> indices;
        Generate(out vertices, out indices);

        // Generate the VertexDeclarationContent from the VertexElements in our format
        VertexDeclarationContent vdc = new VertexDeclarationContent();
        foreach (VertexElement ve in VertexPositionColor.VertexDeclaration.GetVertexElements())
        {
            vdc.VertexElements.Add(ve);
        }

        // Set the declaration for the buffer
        Vertices.VertexDeclaration = vdc;

        // Write to the vertex buffer
        Vertices.Write<VertexPositionColor>(
            0, 
            VertexPositionColor.VertexDeclaration.VertexStride, 
            vertices
        );

        // Add indices
        Indices.AddRange(indices);
    }

    private void Generate(out List<VertexPositionColor> vertices, out List<int> indices)
    {
        // Magic!
    }
}

More coming as I explore further.