This is the last post in this segment of blogs! I will be doing an overview of the material we went over in this class. So let's get right to it.
We learned that shaders are small programs in the graphics pipeline that give the programmer more freedom than the original fixed-function pipeline. We covered three main shader stages in this course: the Vertex Shader, the Geometry Shader, and the Fragment Shader.
The Vertex Shader takes in vertices along with their attributes and any uniforms. What happens inside is up to the programmer, but usually it manipulates the placement of the vertices in the scene. It then outputs the transformed vertices along with their attributes.
Next in the pipeline is primitive assembly, where the triangles are formed and passed to the Geometry Shader. The Geometry Shader is used to add geometry to already existing geometry, which is mainly used for tessellation; its output is more vertices or other primitives. This is then passed to our good friend the Rasterizer, who turns the geometry into fragments (pixels). The rasterizer interpolates the attributes between the three vertices of each triangle to assign data to every fragment it covers.
The Fragment Shader receives these fragments along with the uniforms and the interpolated attributes. What happens inside the shader is up to the programmer's discretion, but normally it manipulates the color of the fragment. The output is the final color of each fragment.
An often used tool with shaders is the frame buffer object, or FBO. There are also vertex buffer objects and vertex array objects, or VBO's and VAO's. These are all just chunks of memory: an FBO holds a sample of the screen as a texture, a VBO holds an array of vertices for an object, and a VAO remembers how that vertex data is laid out.
Now we also went over many different algorithms and shaders during the course of the semester. Here is a list of the different shader techniques:
- Lighting
- Blur/Motion Blur
- Bloom/HDR
- Toon Shading
- Shadow Mapping
- Deferred Rendering
- Mesh Skinning
- Normal Mapping
There are four different types of lighting: Diffuse, Ambient, Emissive, and Specular. Diffuse lighting requires normals and is calculated from the dot product of a vertex's normal and the direction from the vertex to the light. Ambient is a constant base color contributed by the light everywhere. Emissive lighting is light given off by the object itself. Specular lighting is calculated from the dot product of the reflected light ray and the direction from the vertex to the camera, raised to a shininess power.
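Here is a minimal GLSL sketch of the diffuse and specular terms described above (the uniform and varying names are just illustrative, not from our course code):

```glsl
#version 120
// Minimal Phong-style lighting sketch; everything is assumed to be in view space.
varying vec3 vNormal;      // surface normal, interpolated per fragment
varying vec3 vPosition;    // fragment position in view space
uniform vec3 lightPos;     // light position in view space
uniform vec3 lightColor;
uniform vec3 ambientColor;
uniform float shininess;

void main()
{
    vec3 N = normalize(vNormal);
    vec3 L = normalize(lightPos - vPosition);   // direction from fragment to light
    vec3 V = normalize(-vPosition);             // direction from fragment to camera
    vec3 R = reflect(-L, N);                    // reflected light direction

    float diffuse  = max(dot(N, L), 0.0);
    float specular = pow(max(dot(R, V), 0.0), shininess);

    vec3 color = ambientColor + lightColor * diffuse + lightColor * specular;
    gl_FragColor = vec4(color, 1.0);
}
```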
With blur there are two different types we learned, Box and Gaussian. Both are done by sampling neighboring pixels and blending them together. A box blur uses uniform weights around the pixel being sampled, while a Gaussian blur has weights that fall off with distance from the center; in both cases the weights sum to 1. A bright pass can be added on top, which keeps the bright pixels and suppresses the dark ones before blurring (this is the starting point of bloom). Motion blur is done by accumulating frames and blending them together.
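A sketch of one horizontal pass of a Gaussian blur in GLSL (the weights below are one possible normalized kernel; a box blur would simply use 0.2 for every tap):

```glsl
#version 120
// One horizontal pass of a 5-tap Gaussian blur; a matching vertical pass runs afterwards.
uniform sampler2D sceneTex;   // color texture of the rendered scene (from an FBO)
uniform float texelWidth;     // 1.0 / screen width
varying vec2 vTexCoord;

void main()
{
    float weights[5];
    weights[0] = 0.0625; weights[1] = 0.25; weights[2] = 0.375;
    weights[3] = 0.25;   weights[4] = 0.0625;   // the weights sum to 1

    vec4 sum = vec4(0.0);
    for (int i = -2; i <= 2; i++)
        sum += texture2D(sceneTex, vTexCoord + vec2(float(i) * texelWidth, 0.0)) * weights[i + 2];

    gl_FragColor = sum;
}
```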
Toon Shading is done by computing the light level at each fragment and then snapping it to a blocky gradient: if the light level falls into a certain band, the flat sample for that band is used instead of a smooth gradient.
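A tiny sketch of that idea, assuming the blocky gradient is stored in a ramp texture (the names are illustrative):

```glsl
#version 120
// Toon shading sketch: the light level is used to look up a small ramp texture
// that holds a few flat bands instead of a smooth gradient.
uniform sampler2D rampTex;
uniform vec3 baseColor;
varying vec3 vNormal;
varying vec3 vLightDir;   // direction from the fragment to the light

void main()
{
    float intensity = max(dot(normalize(vNormal), normalize(vLightDir)), 0.0);
    vec3 band = texture2D(rampTex, vec2(intensity, 0.5)).rgb;   // pick the band for this light level
    gl_FragColor = vec4(baseColor * band, 1.0);
}
```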
Shadow Mapping uses the depth buffer (Z buffer) rendered from the light source's point of view: first the scene's depth is stored into a shadow map relative to the light. Then, from the camera's perspective, we send the position of the fragment in question to the shader, transform it into the light's space, and compare its depth against the value stored in the shadow map. If the stored depth is less than the fragment's depth, something closer is blocking the light, so we darken that fragment; otherwise we draw it normally.
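A minimal sketch of that comparison in the fragment shader, assuming the vertex shader already transformed the fragment's position into the light's space (the names here are illustrative):

```glsl
#version 120
// Shadow map comparison sketch.
uniform sampler2D shadowMap;   // depth buffer rendered from the light's point of view
varying vec4 shadowCoord;      // fragment position in the light's (biased) clip space
varying vec4 litColor;         // color from the normal lighting calculation

void main()
{
    vec3 proj = shadowCoord.xyz / shadowCoord.w;           // into the [0,1] texture/depth range
    float storedDepth = texture2D(shadowMap, proj.xy).r;   // closest depth the light can see here
    float bias = 0.005;                                    // small offset to avoid shadow acne

    // If something nearer to the light was stored at this spot, the fragment is in shadow.
    float shade = (storedDepth < proj.z - bias) ? 0.5 : 1.0;
    gl_FragColor = litColor * shade;
}
```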
Deferred rendering optimizes the calculations that forward rendering does: instead of lighting every object as it is drawn, the scene's positions, normals, and colors are first written into a set of buffers, and the lighting is then done as a screen-space pass. We use this to be able to have several lights in a single scene. Way more can be said about this, but my understanding of it is currently still weak.
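Since my understanding is still shaky, treat this as only a rough sketch of what the screen-space lighting pass typically looks like for a single diffuse light (the texture and uniform names are made up):

```glsl
#version 120
// Deferred lighting pass sketch: the G-buffer textures were filled in an earlier
// geometry pass; this full-screen pass runs once per light and the results are
// accumulated with additive blending.
uniform sampler2D gPosition;   // view-space position per pixel
uniform sampler2D gNormal;     // surface normal per pixel
uniform sampler2D gAlbedo;     // material color per pixel
uniform vec3 lightPos;
uniform vec3 lightColor;
varying vec2 vTexCoord;

void main()
{
    vec3 P = texture2D(gPosition, vTexCoord).xyz;
    vec3 N = normalize(texture2D(gNormal, vTexCoord).xyz);
    vec3 albedo = texture2D(gAlbedo, vTexCoord).rgb;

    vec3 L = normalize(lightPos - P);
    float diffuse = max(dot(N, L), 0.0);

    gl_FragColor = vec4(albedo * lightColor * diffuse, 1.0);
}
```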
Mesh Skinning is done mainly in the vertex shader, since we are manipulating the vertices of the object being drawn. With mesh skinning, we can deform the skin of a model according to the weights an artist assigns between the vertices and the joints of the skeleton.
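A sketch of linear blend skinning in a vertex shader, assuming each vertex carries up to four joint indices and weights (the array size and attribute names are arbitrary here):

```glsl
#version 120
// Linear blend skinning sketch: the four weights are assumed to sum to 1.
const int MAX_BONES = 40;             // arbitrary limit for the sketch
uniform mat4 boneMatrices[MAX_BONES]; // joint transforms for the current pose
attribute vec4 boneIndices;           // which joints influence this vertex
attribute vec4 boneWeights;           // how strongly each joint influences it

void main()
{
    vec4 p = gl_Vertex;
    vec4 skinned =
        (boneMatrices[int(boneIndices.x)] * p) * boneWeights.x +
        (boneMatrices[int(boneIndices.y)] * p) * boneWeights.y +
        (boneMatrices[int(boneIndices.z)] * p) * boneWeights.z +
        (boneMatrices[int(boneIndices.w)] * p) * boneWeights.w;

    gl_Position = gl_ModelViewProjectionMatrix * skinned;
}
```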
Normal Mapping is used to make low poly models look like high poly models. This is done by baking the normals of the high poly model into a texture and applying them to the fragments of the low poly model. There are two different types of normal mapping, object space and tangent space: object space normals are stored relative to the object being mapped, while tangent space normals are stored relative to the surface (the tangent space) of each face.
Honorable mention goes out to Displacement mapping as we discussed it but rarely went into depth about it.
This wraps up our class of Intermediate Computer Graphics. I learned so much in this class and enjoyed it thoroughly.
Friday, 12 April 2013
Motion Blur
This week's blog is going to go over a simple shader that gives the effect of motion blur! So allow me to give a brief explanation of what motion blur is. Motion blur is the streaking of rapidly moving objects in a still image or a sequence of images such as a movie or animation. It happens when the image being recorded changes during the recording of a single frame, either due to rapid movement or long exposure. As you can see in the photo above, that is not what is going on there. The way you can tell is that, by the definition of motion blur, everything moving in the image would be blurred, yet there are lots of places in the image that are perfectly in focus. That means the photo was captured perfectly clear, and someone then took it into Photoshop and applied blur to specific spots.
Now remember what we know about Photoshop and shaders: everything you can do in Photoshop can be programmed into a shader! First you might be asking yourself, why would we even want this? Most of the time people are trying to get rid of motion blur in their photos and videos. Everyone's looking for crystal clear imaging nowadays, so why add motion blur? We add motion blur to games because we are looking to add a sense of realism. This kind of blur occurs naturally and therefore makes a game's graphics look more natural.
Here is another image that demonstrates good motion blur. This could have actually been the raw image since all the static objects in the scene are in focus while the moving objects have motion blur on them.
I found this image while searching for good motion blur photos for this blog. Thought it was funny so I shall share it.
Now below is the difference between having motion blur in games and not. As you can see, even with motion blur off it looks good because the graphics in general are alright, but motion blur makes the left image look more natural and visually appealing.
This effect can easily be applied in Photoshop with a large arsenal of tools at the developer's disposal. There are all sorts of blurs, and they even have a nice motion blur tool such as the one below.
Here the developer has access to the angle of the motion blur and how far the blur will streak. When programming this effect, we can give the same controls to the developer. To start, the angle of blur will be dynamic: no matter which way the objects are moving, they will cause blur in the direction they are moving. If that isn't the desired effect, or you want to constrain the areas where objects have motion blur, those conditions can be covered with conditional statements on the CPU side of things. As for the amount of blur, or the distance over which the image will be blurred, that is controlled by how many frames we add together for the blurring effect.
When it comes to the actual programming of this, there are many different ways it can be done. The first way I'm going to cover uses no shaders at all; it relies on an OpenGL function called glAccum and the accumulation buffer. What we do is accumulate a certain number of frames into the accumulation buffer; the amount depends on how much blur we want. We then check whether we have accumulated enough frames, and once we have, we blend them together, draw the result, and reset the frame counter. This is a slow but extremely simple way of doing it. A link is provided if you are interested in seeing line by line how it works: http://www.cse.msu.edu/~cse872/tutorial5.html
The other way to do this is on the GPU side in shaderland. This way is a little more annoying but it is still fairly simple. We manually do what we did on the CPU, but have it happen in a fragment shader. We are going to need those frames though, so let's use our favourite tool, FBO's, and let's use a bunch; about five of them, yeah that should be good. We will slap four frames from the screen onto four of the five framebuffers, then send those textures through the pipeline and put that fragment shader to work. The shader mixes all of the frames together and returns one final fully blended version, which gives the blurring effect we need when an object moves. We would still apply the same logic as we had on the CPU. There are even more efficient ways of creating motion blur than that, but those are two different ways it can be done.
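A sketch of that GPU blend pass, assuming the four captured frames are bound as separate textures (the sampler names and weights are just illustrative):

```glsl
#version 120
// Full-screen pass that averages four recent frames captured into FBO textures.
uniform sampler2D frame0;   // newest frame
uniform sampler2D frame1;
uniform sampler2D frame2;
uniform sampler2D frame3;   // oldest frame
varying vec2 vTexCoord;

void main()
{
    vec4 blended = texture2D(frame0, vTexCoord) * 0.4
                 + texture2D(frame1, vTexCoord) * 0.3
                 + texture2D(frame2, vTexCoord) * 0.2
                 + texture2D(frame3, vTexCoord) * 0.1;   // weights sum to 1
    gl_FragColor = blended;
}
```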
UOIT Game Con 2013
Welcome to my blog all about Game Con 2013 at UOIT!
Below is our station fully set up. At our station we initially had one laptop, with the player using the keyboard and the laptop speakers for sound. As our group slowly crawled out of bed and got to the event, our station became better and better, and we collected a few additional resources as the day went on. Eventually we graduated from the laptop to a monitor and a small speaker, and evolved into a station with two playable games running and a controller on each. But when we really felt we "Leveled Up" was when we found a projector sitting unused next to our station. We set it up with one of our laptops running the game and it really gave POP to our station. There were also posters, business cards, and cookies to give extra life to the already awesome station. When we saw others bringing TV's and big monitors we knew we had to step it up and stand out, and that's exactly what we achieved.
Just like at Level Up, we were still programming the game and adding more to it. This time we made our two running games uniform, as well as adding a much better background for the player to view as they played. From the feedback at Level Up we also made some minor adjustments, such as the size of the player and some other minor details. There were many great suggestions given to us by the players at Game Con. Most of the good ones came from the professors; for example, our best suggestions came from Brian and Brent when they did a preliminary assessment of our game's audio. They gave suggestions on how to further improve our in-game sound effects so that the audio would be 3D, and we plan to implement these suggestions by Friday. In terms of gameplay we had suggestions about balancing and about on-screen reminders for the player, like what the buttons do and what they have to do. These were all great suggestions and they are all being taken into account as we prepare the final product.
Here we have the awards table with Mario announcing the winners of the awards at Game Con. The following pictures are the winners of the awards this year.
Here we have, to the left, the second year game of the year. This team developed A Case Of The Mondays.
Up next is the third year game of the year as well as best overall gameplay. This team developed Red Dawn.
Sedona took home the best visual art in the art show above the Game Con with her League of Legends concept art.
I apologize for not having a picture of the first year game of the year team, but the team who won that award made the Super Smash Bros. style game.
Well, that about wraps up Game Con this year! Unfortunately my studio did not take home any awards, but we are determined to try harder next year and take home the gold. Similar to Level Up, our team had a lot of fun at this event and was happy to be a part of it.
Monday, 8 April 2013
Normal Mapping
This blog will cover the graphics topic of normal mapping, also known as bump mapping. No, there is no difference between normal mapping and bump mapping other than the word normal, and bump. If there is one thing I will walk away from this class with and never forget it is that. Thanks for the good laugh Dr. Hogue and students who pretend to pay attention. Anyways, I will cover more detail about normal mapping and how it is done in both object space and tangent space along with the differences between them.
Here is a quick image to show you what an improvement bump mapping can make to graphics; it's a huge improvement in how much more realistic the image on the right looks. The process is a very simple one as well. If you would like to generate a model or surface with this effect, the process begins in a modeling package such as Mudbox. Here we create what we want, say a cube, and paint it so it has a texture. After saving that, we create a high poly version of the same model. Once we have something that is in the hundreds of thousands of faces, we can start to add bumps to the model. We do this with a sculpting tool set to a low magnitude and pull at the faces to make the model bumpy. We don't want to make it too dramatic, because the illusion will break down at the end of the process, so nice little bumps work great for this. Once the model is bumped the way you would like it to look, we generate a normal map, which can be done in either object or tangent space. Here are a couple of examples I made for both:
The one on the left is the normal map of my cube in object space and the one on the right is in tangent space. These were generated from the texture map generator in Mudbox after I selected both types of normal mapping. These examples are very dramatic because, as you can tell, I sculpted with a high magnitude brush. The reason why these two look different is easy to explain, but first let me explain why they look so colorful. A normal map is simply the normals of every fragment stored in the texture map; more specifically, it is the xyz normals being displayed as rgb values. Mind blown, I know. The x part of the normal vector is represented as red, the y as green, and finally the z as blue. Now, looking at these two different textures and knowing what I just explained, you can say a few different things about them.
Let's look at the left one first, our object space normal map. It's called the object space normal map because all of the normals are relative to the object itself. Remember how I said green is equal to the y portion of the normal? Well, if you look at the bottom square of the normal map it is entirely green, so I know that this green square is the top face of the cube, because relative to the object's space, those normals are all pointing along the y axis. Likewise, the top left square is almost all red, so that square must be on the right face of the cube.
Now we look at the tangent space normal map. All the squares look relatively the same in color; this happens because with tangent space, we are no longer tied to the object itself. The normals are now relative to the surface of the mesh at each point of the texture. By that I mean if we were to deform the mesh, a tangent space normal map will hold true through any deformation the mesh skinning throws at it. The reason why it is mostly in the z axis, or blue value, is that an unperturbed normal points straight out of the surface, which is the z axis of tangent space. Here is a quick snippet from a great source on tangent space normal mapping:
If we now calculate U and V in the object local space (as well as U cross V which is the normal to the texture) we can generate a transformation matrix to move the normals from the map into the object local space. From there they can be transformed to world space as usual and take part in lighting calculation. The common practice is to call the U vector in the object local space the Tangent and the V vector in the object local space the Bitangent. The transformation matrix that we need to generate is called a TBN matrix (Tangent-Bitangent-Normal). These Tangent-Bitangent-Normal vectors define a coordinate system known as Tangent (or texture) space. Therefore, the normals in the map are stored in tangent/texture space.
The link is provided at the end if you are interested in more normal mapping; there are all sorts of math and code examples in it, plus some background. Anyways, the snippet explains it well: we need the tangent, bitangent, and normal to build the TBN matrix that relates tangent space to the object's space, so the normals stored in the map can be used in the lighting calculation.
The question now is, which one do I use? Why wouldn't we always use tangent space normal mapping? Well, you could always use tangent space normal mapping if you wanted to, and for many objects you will get the same result either way. The case where it matters is dynamic meshes that deform: object space normal mapping won't work for those. If you try it, you get something like the candy wrapper effect, where the normals are deformed relative to the object and not the surface, and the skin starts to shade in glitchy ways you don't want. So a good time to use object space normal mapping is when the object or surface is static, such as a wall or a statue. This is good because it is a bit cheaper to compute than tangent space and you get the same effect.
Here are the fragment shaders I used to demonstrate both object and tangent space normal mapping:
Object Space fragment shader
Tangent Space fragment shader
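As a minimal sketch of the tangent space version, following the TBN idea from the quote above (the uniform and varying names here are just illustrative, not my exact shader):

```glsl
#version 120
// Tangent space normal mapping sketch: the vertex shader passes down the interpolated
// normal, tangent and bitangent; the TBN matrix moves the sampled normal out of
// tangent space so it can be used in the usual lighting calculation.
uniform sampler2D diffuseTex;
uniform sampler2D normalMap;
varying vec3 vNormal;
varying vec3 vTangent;
varying vec3 vBitangent;
varying vec2 vTexCoord;
varying vec3 vLightDir;   // direction to the light, in the same space as the TBN vectors

void main()
{
    // Sampled normals are stored in [0,1], so expand them back to [-1,1]
    vec3 mapNormal = texture2D(normalMap, vTexCoord).rgb * 2.0 - 1.0;

    mat3 TBN = mat3(normalize(vTangent), normalize(vBitangent), normalize(vNormal));
    vec3 N = normalize(TBN * mapNormal);   // normal taken out of tangent space

    float diffuse = max(dot(N, normalize(vLightDir)), 0.0);
    vec4 texColor = texture2D(diffuseTex, vTexCoord);
    gl_FragColor = vec4(texColor.rgb * diffuse, texColor.a);
}
```

The object space version is even shorter, since the sampled normal can be used directly in the object's space (or pushed through the model matrix) without building a TBN matrix.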
Now at the end of the day, after all of this has been rendered through the pipeline, the models and textures are being loaded correctly and the shaders don't complain about random typos, there is magic called normal mapping.
That cube from the beginning of the blog.
http://ogldev.atspace.co.uk/www/tutorial26/tutorial26.html
Level Up Showcase 2013
So this week I will talk about my studio's trip to Unity Up-- I mean Level Up. Haha, just kidding U of T you can design games well. But seriously, Level Up was a great experience and my team really enjoyed being there, despite not having a spot and having to create our own station from left over tables and chairs laying around the room.
At Level Up, there were a variety of different schools participating in the event. Schools from all across Ontario such as UOIT (us), U of T (our rivals), Brock (game of the year winners), and many others such as Centennial, George Brown, Humber, OCAD, Seneca, Sheridan, Trios, and York. The event took place on April 3, 2013 and ran from 5 to 11 pm for the public audience. For us avid and dedicated game devs, the party started at 10 am so we could get everything set up for this glorious event. When you think about it, for those of us in Oshawa who didn't want to drive downtown, the day really started at 7 am: getting ready, making the commute from the school to the train station and on to Union, plus the time spent getting lost downtown (I will always follow Phil's directions over mine from now on). Then we missed the train home by about five minutes and had to wait for the next one, which, with the drive home, clocks you back at home around 12:30. Regardless of the long day, it remained a great day.
When we got to Level Up our team first established priorities: find power, and obtain internet access. The game needed some fine tuning for a public audience. Our major fine tune was to make it completely controller accessible. Once we did that, we realized that all of our UI showed aids for the keyboard controls, so we needed to go in and change the images to work for the controllers. We then tried updating all of the art, but time was running out before 5 pm, so without stable internet our station ended up with two builds of the game that were different art wise. It was okay, because only a few people noticed and they were all from our school.
During the actual event it was cool to see my family, Phil's family, and Justin's family show up and have them all play our games. I didn't get much of a chance to go around myself and see the other games because I was busy running our station all day; no one else in my group had as much enthusiasm as me for getting people to play through our game. We really needed to put an effort into getting people to play because we didn't have a giant TV or even a computer monitor. We had two laptops with the game running, as well as a laptop with our social media set up on it, such as our website, Facebook page, and Twitter feed.
Speaking of Twitter! We had someone follow our studio at Level Up. So proud guys, good job! Unfortunately, in terms of Twitter and our business cards, our Twitter handle was spelled wrong on the back of the card. That was a letdown, but at least our email was right on the back. The business cards in general were quite the letdown: they had no room to write anything extra on since they were all black, and they were also very pixelated, so they were hard to read in general. What made the cards worse was the fact that not all of the information was actually on the back; we were missing the Facebook link, phone number, and most importantly the website for our studio! We handed them out nonetheless, because we wanted people to still have something so they could get in contact with the studio.
Since this is a part of my computer graphics blogs, I should probably say some things about the graphics I saw at Level Up. For the most part, UOIT did the best job on "from scratch" graphics and engines. By that I mean they built their engines and graphics using graphics libraries and shader languages, not an engine someone else made. This is impressive, but only if you know it was developed from the ground up; when you set it next to a game made with UDK or CryEngine, it will be no competition for the pre-built engines. Most of our competitors at Level Up were using some sort of pre-built engine, which eliminated a lot of development time because the engine for their game was already constructed and functioning properly.
Now that I've got that out of the way, the graphics in the other games were really good! They had art styles that matched and color palettes that meshed smoothly. When I did have a brief chance to ask some of the other teams how they got such nice graphics, they said they used Unity or some other game engine. I would then go on to ask what shaders they had in their game, and most of them didn't even know what a shader was! It was disappointing to hear, because shaders are awesome and could have made their games look that much better (I stretched my arms out really far). The funny thing was, they had certain shader algorithms functioning in their games, such as shadow mapping and motion blur, and they didn't even realize that it was in fact a shader creating those effects.
At the end of the day, everyone from my studio, including myself, had a great time, and it was well worth the hard work and dedication to be there. If there was one thing I could suggest to improve the event, it would be better opportunities to socialize with game development industry professionals. I did have a chance to talk to some reps from Big Viking Games and Uken Games, but there were so many other big companies there like Ubisoft and Blizzard Entertainment! Just kidding, I wish Bliz was there. It would've been great to have an opportunity to talk to Ubisoft employees, but I didn't get the chance on the night of the event. Talking to Big Viking Games was probably better anyways, because they are closer to my level of skill; the only change is I would need to learn HTML5 because that's what they program their mobile games in.
To wrap things up, this was a great experience because I was able to meet industry professionals and get lots of feedback on my game and on developing games. This gave me incentive to hopefully produce something over the summer and try to make it to Level Up on crack, also known as the IGF, or Independent Games Festival, in San Francisco. By the way (side note for Dr. Hogue), there are two teams that I know of that are taking you up on the deal you announced in class regarding IGF; there would've been more attempting to go if it was all inclusive.
Frame Buffer Objects (FBO's)
Glad to see you back on our travels through the adventure of graphics in games. Today we're going to explore more magic known as frame buffer objects. FBO's are really cool and useful tools when it comes to graphics as they allow for many fancy shader effects to be done. When I think about an FBO I think magic because as simple as they sound in theory, the actual implementation and integration into a game can be buggy.
In short, the theory behind a frame buffer object is that you take a screenshot of your rendered scene and save the data; from there you take the data and work wonders. Countless options open up once FBO's are working in a game. I've noticed that many people in our class have been struggling with the concept of a frame buffer object, and it is delaying their production of certain shaders that rely on FBO's. It has certainly put my group behind on the shaders we want because of FBO problems. Anyways, we got them working recently, to a hacky extent, and are ready to make use of our new tool.
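Once the scene is sitting in an FBO's color texture, any full-screen effect is just a fragment shader reading it back. As a sketch (the texture and varying names are made up), here is the captured frame being turned greyscale:

```glsl
#version 120
// Simple post-process pass over a frame captured into an FBO.
uniform sampler2D sceneTex;   // color attachment of the FBO
varying vec2 vTexCoord;

void main()
{
    vec3 color = texture2D(sceneTex, vTexCoord).rgb;
    float grey = dot(color, vec3(0.299, 0.587, 0.114));   // standard luminance weights
    gl_FragColor = vec4(vec3(grey), 1.0);
}
```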
The reason why our FBO is hacky is that we hard coded the dimensions when creating the frame buffer. If the generation of the framebuffer wasn't hard coded, our game could be displayed at any screen size rather than only the one we initially created. We found this out the hard way when we attempted to project our game onto a screen and the framebuffer caused it to be very zoomed in because of the smaller resolution.
If the framebuffer were generated more dynamically it would solve this problem and give more power to the FBO, because it could be properly downsized and resized to support bloom. This could be done through proper use of the viewport, so the buffer can be resized according to the screen's size and resolution.
Currently, our game needs to be dynamic for the Level Up Showcase on Wednesday, so for now we have temporarily removed FBO's until after Level Up.
A frame buffer can be used in many different ways other than simply taking the image and displaying it on screen, since it is just data. Thinking way outside the box here: since the FBO is just a long stream of data, we could store whatever information we want in it and send it to the GPU for processing. This is the same way a VBO can be manipulated to send data to the GPU; normally a VBO sends information about the vertices of a model, but at the end of the day it is just another way to store and transfer long streams of data.
Other ways I have used FBO's so far in my FBO career: a frame buffer for edge detection in my cel shading program, more frame buffers for motion blur on the GPU, and for bloom a frame buffer each for the bright pass, the blur pass, and the final composite of the effects.
Tuesday, 19 March 2013
Lighting in Bullet Devil
Welcome back to another episode of lighting in games! For today's special we will be serving up lighting from the game Bullet Devil, produced by Studio 8 for the game development workshop (GDW). We will take a look at the difference lighting makes in the game using shaders, then go into some detail on the code (just the shaders) for how we go about doing this. Let's get to it.
So let's start by showing the game as it was before we introduced it to the magical world of shaderland. The image below shows a very plain and almost 2D looking game without any source of shader power. The main character model is textured red as a placeholder because the models are changing anyways for the game.
Now, it's hard to see in these pictures because they are so small (so I recommend you go and play the game), but there is per fragment shading applied to the models in the game. If you don't know what per fragment lighting is, then I suggest you go back to my previous blog and do some reading!
Time for the fun details on how this was accomplished. Again, the theory is all explained in my previous blog, so I'm just going to jump into how we did it. Most of the reference for how we did this comes from what we learned in the tutorial, with some modifications. Those modifications were to take the per vertex lighting model shown in class and make it per fragment lighting. We also have shadow mapping built into the shader, but it won't be used until the FBO's are properly working. If you don't know what an FBO is, read on to my next blog where I discuss frame buffer objects.
Alright, when we were taught this, the GLSL version used was 120. We plan to continue using 120 to the end of the semester; next year our team will step up to a newer version like 330 or 420, but for now 120 is doing the job for us. Here is some sample code of our vertex shader:
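(What follows is a minimal sketch of a typical version 120 vertex shader for per fragment lighting, with illustrative names; it shows the idea rather than our exact code.)

```glsl
#version 120
// Per-fragment lighting setup: pass the view-space position and normal down
// as varyings so the lighting math can run in the fragment shader.
varying vec3 vPosition;   // vertex position in view (eye) space
varying vec3 vNormal;     // normal in view space
varying vec2 vTexCoord;

void main()
{
    vPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
    vNormal   = gl_NormalMatrix * gl_Normal;
    vTexCoord = gl_MultiTexCoord0.xy;

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```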
Next is the sample code for our fragment shader:
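(Again, a sketch in the same spirit rather than the studio's exact shader: the lighting math from the previous blog, moved into the fragment shader so it runs per fragment.)

```glsl
#version 120
// Per-fragment Phong-style lighting, all in view space.
varying vec3 vPosition;
varying vec3 vNormal;
varying vec2 vTexCoord;
uniform sampler2D diffuseTex;
uniform vec3 lightPos;        // light position in view space
uniform vec3 lightColor;
uniform vec3 ambientColor;
uniform float shininess;

void main()
{
    vec3 N = normalize(vNormal);
    vec3 L = normalize(lightPos - vPosition);
    vec3 V = normalize(-vPosition);
    vec3 R = reflect(-L, N);

    float diffuse  = max(dot(N, L), 0.0);
    float specular = pow(max(dot(R, V), 0.0), shininess);

    vec3 texColor = texture2D(diffuseTex, vTexCoord).rgb;
    vec3 color = texColor * (ambientColor + lightColor * diffuse) + lightColor * specular;
    gl_FragColor = vec4(color, 1.0);
}
```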
So as you can see, it stays very close to what I explained in the last blog. The only thing in there that I haven't explained in detail is the shadow mapping. I did touch on it in my post processing blog, and when it is actually in the game I will write a blog for it. At the end of the day it has its own math for the algorithm, and its result can simply be combined with the per fragment lighting to give the proper result in game.