Resident Evil 6 won't run and shows "unsupported pixel shader 3.0 version detected". What can I do?

The second time I tried to play Resident Evil 5, I got: ERR06: Unsupported pixel shader version detected 2.0
Please help. The game now shows ERR06 even though it ran fine before; I reinstalled the operating system before playing it the second time.
The graphics driver is unsuitable; install the correct one.
Why does Devil May Cry 4 (DX9) show "ERR06: Unsupported pixel shader version detected 1.3" when it starts?
Accepted answer:
Devil May Cry 4 (a late-2008 game) renders its shading effects with Pixel Shader 3.0, and current cards already support Pixel Shader 4.0, so the game requires hardware that supports Shader Model 3.0 and DirectX 9.0c. "ERR06: Unsupported pixel shader version detected 1.3" means your card only supports Pixel Shader 1.3, which is not enough. On the NVIDIA side the requirement starts at the GeForce 6600GT, so the only real fix is to replace the graphics card. The other answer below is just noise.
One other answer:
Try uninstalling and reinstalling the game.
From OpenGL.org
Quite a few websites show the same mistakes and the mistakes presented in their tutorials are copied and pasted by those who want to learn OpenGL. This page has been created so that newcomers understand GL programming a little better instead of working by trial and error.
There are also other articles explaining common mistakes:
Unexpected Results you can get when using OpenGL
Mistakes related to measuring Performance
Common Mistakes when using deprecated functionality
One of the possible mistakes related to this is to check for the presence of an extension, but then use the corresponding core functions. The correct behavior is to check for the presence of the extension if you want to use the extension API, and check the GL version if you want to use the core API. In case of a core extension, you should check for both the version and the presence of the extension; if either is there, you can use the functionality.
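As a minimal sketch of that rule (this example is not from the original article, and GL_ARB_direct_state_access is just an illustrative extension name), you could check the core version first and then fall back to the extension list:

bool HasDirectStateAccess()
{
    //Core since GL 4.5; otherwise look for the extension.
    //Requires a GL 3.0+ context for glGetStringi, and <cstring> for strcmp.
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    if(major > 4 || (major == 4 && minor >= 5))
        return true;   //core API is available

    GLint numExtensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExtensions);
    for(GLint i = 0; i < numExtensions; ++i)
    {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if(strcmp(ext, "GL_ARB_direct_state_access") == 0)
            return true;   //extension API is available
    }
    return false;
}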
In an object-oriented language like C++, it is often useful to have a class that wraps an OpenGL object. For example, one might have a texture object that has a constructor and a destructor like the following:
MyTexture::MyTexture(const char *pfilePath)
{
    textureID = 0;
    if(LoadFile(pfilePath) == ERROR)
        return;
    glGenTextures(1, &textureID);
    //More GL code...
}

MyTexture::~MyTexture()
{
    if(textureID)
        glDeleteTextures(1, &textureID);
}
There is a large pitfall with doing this. OpenGL functions do not work unless an OpenGL context has been created and is active within that thread. Thus, glGenTextures will do nothing before context creation, and glDeleteTextures will do nothing after context destruction. The latter problem is not a significant concern since OpenGL contexts clean up after themselves, but the former is a problem.
This problem usually manifests itself when someone creates a texture object at global scope. There are several potential solutions:
Do not use constructors/destructors to initialize/destroy OpenGL objects. Instead, use member functions of these classes for these purposes. This violates RAII principles, so this is not the best course of action.
Have your OpenGL object constructors throw an exception if a context has not been created yet. This requires an addition to your context creation functionality that tells your code when a context has been created and is active (a sketch of this approach follows this list).
Create a class that owns all other OpenGL related objects. This class should also be responsible for creating the context in its constructor.
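A minimal sketch of the second option, under the assumption that your windowing layer exposes a hypothetical IsGLContextCurrent() helper (for example, wrapping wglGetCurrentContext or glXGetCurrentContext):

#include <stdexcept>

MyTexture::MyTexture(const char *pfilePath)
{
    //IsGLContextCurrent() is a hypothetical helper provided by your windowing code.
    if(!IsGLContextCurrent())
        throw std::runtime_error("MyTexture constructed before an OpenGL context was made current");

    textureID = 0;
    if(LoadFile(pfilePath) == ERROR)
        return;
    glGenTextures(1, &textureID);
}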
There's another issue when using OpenGL with a language like C++. Consider the following function:
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, pname, param);
}
The problem is that the binding of the texture is hidden from the user of the class. There may be performance implications for doing repeated binding of objects (especially since the API may not seem heavyweight to the outside user). But the major concern is correctness: the bound objects are global state, which a local member function has now changed.
This can cause many sources of hidden breakage. The safe way to implement this is as follows:
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    GLuint boundTexture = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*) &boundTexture);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexParameteri(GL_TEXTURE_2D, pname, param);
    glBindTexture(GL_TEXTURE_2D, boundTexture);
}
Note that this solution emphasizes correctness over performance; the glGetIntegerv call may not be particularly fast.
A more effective solution is to use direct state access, which requires OpenGL 4.5 or ARB_direct_state_access, or the older EXT_direct_state_access extension:
void MyTexture::TexParameter(GLenum pname, GLint param)
{
    //Core/ARB DSA takes no target; with EXT_direct_state_access use
    //glTextureParameteriEXT(textureID, GL_TEXTURE_2D, pname, param) instead.
    glTextureParameteri(textureID, pname, param);
}
You create storage for a texture and upload pixels to it with glTexImage2D (or a similar function, as appropriate to the type of texture). If your program crashes during the upload, or diagonal lines appear in the resulting image, this is because the alignment of each horizontal line of your pixel array is not a multiple of 4. This typically happens to users loading an image that is of the RGB or BGR format (for example, 24 BPP images), depending on the source of your image data.
Example: your image width = 401 and height = 500. The height is irrelevant; what matters is the width. If we do the math, 401 pixels x 3 bytes = 1203, which is not divisible by 4. Some image file formats may inherently align each row to 4 bytes, but some do not. For those that don't, each row will start exactly 1203 bytes from the start of the last. OpenGL's row alignment can be changed to fit the row alignment of your image data. This is done by calling glPixelStorei(GL_UNPACK_ALIGNMENT, #), where # is the alignment you want. The default alignment is 4.
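For example, a minimal sketch for the 401 x 500 RGB image above, assuming its rows are tightly packed:

//Each row is 401 * 3 = 1203 bytes, not a multiple of 4, so tell GL that rows are byte-aligned.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 401, 500, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);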
And if you are interested, most GPUs like chunks of 4 bytes. In other words, GL_RGBA or GL_BGRA is preferred when each component is a byte. GL_RGB and GL_BGR are considered bizarre since most GPUs, most CPUs and any other kind of chip don't handle 24 bits. This means the driver converts your GL_RGB or GL_BGR to what the GPU prefers, which typically is BGRA.
Similarly, if you read a buffer with glReadPixels, you might get similar problems. There is a GL_PACK_ALIGNMENT just like the GL_UNPACK_ALIGNMENT. The default alignment is again 4, which means each horizontal line must be a multiple of 4 bytes in size. If you read the buffer with a format such as GL_BGRA or GL_RGBA you won't have any problems, since each line will always be a multiple of 4. If you read it in a format such as GL_BGR or GL_RGB then you risk running into this problem.
The GL_PACK/UNPACK_ALIGNMENTs can only be 1, 2, 4, or 8. So an alignment of 3 is not allowed.
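A minimal sketch of the read-back case, assuming you want tightly packed BGR data (std::vector requires <vector>):

std::vector<unsigned char> readback(width * height * 3);   //3 bytes per pixel (BGR)
glPixelStorei(GL_PACK_ALIGNMENT, 1);   //rows in the destination buffer are tightly packed
glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, readback.data());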
You can (but it is not advisable to do so) call glTexImage2D(GL_TEXTURE_2D, 0, X, width, height, 0, format, type, pixels) with X set to 1, 2, 3, or 4. The X refers to the number of components (GL_RED would be 1, GL_RG would be 2, GL_RGB would be 3, GL_RGBA would be 4).
It is preferred to actually give a real image format, one with a specific internal precision. If the OpenGL implementation does not support the particular format and precision you choose, the driver will internally convert it into something it does support.
OpenGL versions 3.x and above have a set of required image formats that all conforming implementations must implement.
Note: The creation of immutable texture storage actively forbids the use of unsized image formats, or integers as above.
We should also state that it is common to see the following on tutorial websites:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
Although GL will accept GL_RGB, it is up to the driver to decide an appropriate precision. We recommend that you be specific and write GL_RGB8:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
This means you want the driver to actually store it in the R8G8B8 format. We should also state that most GPUs will internally convert GL_RGB8 into GL_RGBA8, so it's probably best to steer clear of GL_RGB8. We should also state that on some platforms, such as Windows, GL_BGRA is preferred as the pixel upload format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
This uses GL_RGBA8 for the internal format. GL_BGRA and GL_UNSIGNED_BYTE (or GL_UNSIGNED_INT_8_8_8_8_REV) describe the data in the pixels array. The driver will likely not have to perform any CPU-based conversion and can DMA this data directly to the video card. Benchmarking shows that on Windows, with nVidia and ATI/AMD, this is the optimal format.
Preferred pixel transfer formats and types can be queried from the implementation (see the note on glGetInternalformativ further below).
When you select a pixelformat for your window and you ask for a depth buffer, the depth buffer is typically stored as a normalized integer with a bitdepth of 16, 24, or 32 bits.
Note: You can create images with a true floating-point depth format. But these can only be used with Framebuffer Objects, not the Default Framebuffer.
In OpenGL, all depth values lie in the range [0, 1]. The integer normalization process simply converts this floating-point range into integer values of the appropriate precision. It is the integer value that is stored in the depth buffer.
Typically, 24-bit depth buffers will pad each depth value out to 32 bits, so 8 bits per pixel will go unused. However, if you ask for an 8-bit stencil buffer along with the depth buffer, the two separate images will generally be combined into a single depth/stencil image: 24 bits will be used for depth, and the remaining 8 bits for stencil.
Now that the misconception about depth buffers being floating point is resolved, what is wrong with this call?
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, mypixels);
Because the depth format is a normalized integer format, the driver will have to use the CPU to convert the normalized integer data into floating-point values. This is slow.
The preferred way to handle this is with this code:
if(depth_buffer_precision == 16)
{
    GLushort mypixels[width*height];
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, mypixels);
}
else if(depth_buffer_precision == 24)
{
    GLuint mypixels[width*height];   //There is no 24 bit variable, so we'll have to settle for 32 bit
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT_24_8, mypixels);   //No upconversion.
}
else if(depth_buffer_precision == 32)
{
    GLuint mypixels[width*height];
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, mypixels);
}
If you have a depth/stencil format, you can get the depth/stencil data this way:
GLuint mypixels[width*height];
glReadPixels(0, 0, width, height, GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, mypixels);
What's wrong with this code?
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
The texture won't work because it is incomplete. The default GL_TEXTURE_MIN_FILTER state is GL_NEAREST_MIPMAP_LINEAR. And because OpenGL defines the default GL_TEXTURE_MAX_LEVEL to be 1000, OpenGL will expect there to be mipmap levels defined. Since you have only defined a single mipmap level, OpenGL will consider the texture incomplete until the GL_TEXTURE_MAX_LEVEL is properly set, or the GL_TEXTURE_MIN_FILTER parameter is set to not use mipmaps.
Better code would be to use glTexStorage2D (if you have OpenGL 4.2 or ARB_texture_storage) to allocate the texture's storage, then upload with glTexSubImage2D:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
This creates a texture with a single mipmap level, and sets all of the parameters appropriately. If you wanted to have multiple mipmaps, then you should change the 1 to the number of mipmaps you want. You will also need separate glTexSubImage2D calls to upload each mipmap.
If that is unavailable, you can get a similar effect from this code:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Again, if you use more than one mipmap, you should change the GL_TEXTURE_MAX_LEVEL to state how many you will use (minus 1; the base/max level is a closed range), then perform a glTexImage2D (note the lack of "Sub") for each mipmap.
Mipmaps of a texture can be automatically generated with the glGenerateMipmap function. OpenGL 3.0 or greater is required for this function (or the extension GL_ARB_framebuffer_object). The function works quite simply: when you call it for a texture, mipmaps are generated for that texture:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexStorage2D(GL_TEXTURE_2D, num_mipmaps, GL_RGBA8, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);
//Generate num_mipmaps number of mipmaps here.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
If texture storage is not available, you can use the older API:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);
//Generate mipmaps now!!!
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
Warning: It has been reported that on some ATI drivers, glGenerateMipmap(GL_TEXTURE_2D) has no effect unless you precede it with a call to glEnable(GL_TEXTURE_2D) in this particular case. Once again, to be clear, bind the texture, glEnable, then glGenerateMipmap. This is a bug and has been in the ATI drivers for a while. Perhaps by the time you read this, it will have been corrected. (glGenerateMipmap doesn't work on ATI as of 2011)
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
OpenGL 1.4 is required for support for automatic mipmap generation. GL_GENERATE_MIPMAP is part of the texture object state and it is a flag (GL_TRUE or GL_FALSE). If it is set to GL_TRUE, then whenever texture level 0 is updated, the mipmaps will all be regenerated.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
In GL 3.0, GL_GENERATE_MIPMAP is deprecated, and in 3.1 and above, it was removed. So for those versions, you must use glGenerateMipmap.
Never use gluBuild2DMipmaps. Use either GL_GENERATE_MIPMAP (requires GL 1.4) or the glGenerateMipmap function (requires GL 3.0).
Why should you check for errors? Why should you call glGetError()? Consider the following code:
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
//Requires GL 1.4. Removed from GL 3.1 and above.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
The code doesn't call glGetError(). If you were to call glGetError, it would return GL_INVALID_ENUM. If you were to place a glGetError call after each function call, you would notice that the error is raised at the glTexParameteri call that sets GL_TEXTURE_MAG_FILTER to GL_LINEAR_MIPMAP_LINEAR. The magnification filter can't specify the use of mipmaps; only the minification filter can do that.
Always check for OpenGL errors.
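A minimal sketch of an error check you can drop in after a block of GL calls (errors are queued, so drain the queue until GL_NO_ERROR comes back; fprintf requires <cstdio>):

GLenum err;
while((err = glGetError()) != GL_NO_ERROR)
{
    fprintf(stderr, "GL error: 0x%04X\n", err);
}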
It's best to set the wrap mode to GL_CLAMP_TO_EDGE and not the other wrap modes. Don't forget to define all 6 faces, else the texture is considered incomplete. Don't forget to set up GL_TEXTURE_WRAP_R, because cubemaps require 3D texture coordinates.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAX_LEVEL, 0);
//Define all 6 faces
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face0);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face1);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face2);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face3);
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face4);
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA8, width, height, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels_face5);
If you want to auto-generate mipmaps, you can use any of the aforementioned mechanisms. However, OpenGL will not blend across the faces of the cubemap when generating its mipmaps, leaving visible seams at lower mip levels, unless you enable seamless cubemap texturing (GL_TEXTURE_CUBE_MAP_SEAMLESS, core since OpenGL 3.2).
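A minimal sketch, assuming an OpenGL 3.2+ context (or ARB_seamless_cube_map): enable seamless filtering globally, then generate the cubemap mipmaps.

glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);   //global state, not per-texture
glBindTexture(GL_TEXTURE_CUBE_MAP, textureID);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);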
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
Never use GL_CLAMP; what you intended was GL_CLAMP_TO_EDGE. Indeed, GL_CLAMP was removed from core GL 3.1+, so it's not even an option anymore.
Note: If you are curious as to what GL_CLAMP used to mean, it referred to blending texture edge texels with border texels. This is different from GL_CLAMP_TO_BORDER, where the clamping happens to a solid border color. The GL_CLAMP behavior was tied to special border texels. Effectively, each texture had a 1-pixel border. This was useful for having more easily seamless texturing, but it was never implemented in hardware directly. So it was removed.
To change texels in an already existing 2D texture, use glTexSubImage2D:
glBindTexture(GL_TEXTURE_2D, textureID);
//A texture you have already created storage for
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
glTexImage2D creates the storage for the texture, defining the size/format and removing all previous pixel data. glTexSubImage2D only modifies pixel data within the texture. It can be used to update all the texels, or simply a portion of them.
To copy texels from the framebuffer, use glCopyTexSubImage2D.
glBindTexture(GL_TEXTURE_2D, textureID);   //A texture you have already created storage for
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);   //Copy current read buffer to texture
Note that there is a glCopyTexImage2D function, which does the copy to fill the image, but also defines the image size, format and so forth, just like glTexImage2D.
To render directly to a texture, without doing a copy as above, use Framebuffer Objects.
Warning: NVIDIA's OpenGL driver has a known issue with using incomplete textures. If the texture is not texture complete, the FBO itself will be considered GL_FRAMEBUFFER_UNSUPPORTED, or will have GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT. This is a driver bug, as the OpenGL specification does not allow implementations to return either of these values simply because a texture is not yet complete. Until this is resolved in NVIDIA's drivers, it is advised to make sure that all textures have mipmap levels, and that all texture parameters are properly set up for the format of the texture. For example, integral textures are not complete if the mag and min filters have any LINEAR fields.
First, check to see if the depth test is active. Make sure that glEnable(GL_DEPTH_TEST) has been called and that an appropriate glDepthFunc is active. Also make sure that the glDepthRange matches the depth function.
Assuming all of that has been set up correctly, your framebuffer may not have a depth buffer at all. This is easy to see for a Framebuffer Object you created. For the Default Framebuffer, this depends entirely on how you created your OpenGL context.
For example, if you are using GLUT, you need to make sure you pass GLUT_DEPTH to the glutInitDisplayMode function.
If you are doing blending and you need a destination alpha, you need to make sure that your render target has one. This is easy to ensure when rendering to a Framebuffer Object. But with the Default Framebuffer, it depends on how you created your OpenGL context.
For example, if you are using GLUT, you need to make sure you pass GLUT_ALPHA to the glutInitDisplayMode function.
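A minimal GLUT sketch covering both of the preceding points (the exact flag combination is an assumption about what your application needs): request a double-buffered RGBA window with a depth buffer and a destination alpha channel.

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_ALPHA);
glutInitWindowSize(800, 600);
glutCreateWindow("My GL window");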
Use glFlush if you are rendering directly to the front buffer of the Default Framebuffer. It is better to have a double-buffered window, but if you have a case where you want to render to the window directly, then go ahead.
There are a lot of tutorial websites that suggest you do this:
glFlush();
SwapBuffers();
This is unnecessary. The SwapBuffers call takes care of flushing and command processing.
The glFlush and glFinish functions deal with synchronizing CPU actions with GPU commands.
In many cases, explicit synchronization like this is unnecessary. The use of sync objects can make it necessary, as can the use of arbitrary reads and writes from or to images.
As such, you should only use glFinish when you are doing something that the specification specifically states will not be synchronous.
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
For good performance, use a format that is directly supported by the GPU, so that the driver can essentially do a memcpy to the GPU. Most graphics cards support GL_BGRA. Example:
glDrawPixels(width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
However, it is recommended that you use a texture instead, and simply update the texture with glTexSubImage2D, possibly with a Pixel Buffer Object.
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
Avoid glLoadMatrixd, glRotated and any other functions that take the double type. Most GPUs don't support GL_DOUBLE (double), so the driver will convert the data to GL_FLOAT (float) and send it to the GPU. If you put GL_DOUBLE data in a VBO, the performance might even be much worse than immediate mode (immediate mode means glBegin, glVertex, glEnd). GL doesn't offer any better way to know what the GPU prefers.
To achieve good pixel transfer performance, you need to use a pixel transfer format that the implementation can directly work with. Consider this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
The problem is that the pixel transfer format GL_RGBA may not be directly supported for GL_RGBA8 formats. On certain platforms, the GPU prefers that red and blue be swapped (GL_BGRA).
If you supply GL_RGBA, then the driver may have to do the swapping for you which is slow. If you do use GL_BGRA, the call to pixel transfer will be much faster.
Keep in mind that the 3rd parameter must be kept as GL_RGBA8. This defines the texture's image format; the last three parameters describe how your pixel data is stored. The image format doesn't define the order the texture stores data in, so the GPU is still allowed to store it internally as BGRA.
Note that GL_BGRA pixel transfer format is only preferred when uploading to GL_RGBA8 images. When dealing with other formats, like GL_RGBA16, GL_RGBA8UI or even GL_RGBA8_SNORM, then the regular GL_RGBA ordering may be preferred.
On which platforms is GL_BGRA preferred? Making a list would be too long, but one example is Microsoft Windows. Note that with GL 4.3 or ARB_internalformat_query2, you can simply ask the implementation what the preferred format is with glGetInternalformativ.
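A minimal sketch of that query, assuming GL 4.3 or ARB_internalformat_query2 is available:

GLint preferredFormat = GL_NONE, preferredType = GL_NONE;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_FORMAT, 1, &preferredFormat);
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_TYPE, 1, &preferredType);
//preferredFormat/preferredType can then be used as the format/type arguments of glTexSubImage2D.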
A modern OpenGL program should always use double buffering. A modern 3D OpenGL program should also have a depth buffer.
Render sequence should be like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
RenderScene();
SwapBuffers(hdc);
//For Windows
The buffers should always be cleared. On much older hardware, there was a technique to get away without clearing the scene, but on even semi-recent hardware, this will actually make things slower. So always do the clear.
If your window is covered, partially covered, or outside the desktop area, the GPU might not render to those portions. Reading from those areas may likewise produce garbage data.
This is because those pixels fail the "pixel ownership test". Only pixels that pass this test have valid data. Those that fail have undefined contents.
If this is a problem for you (note: it's only a problem if you need to read data back from the covered areas), the solution is to render to a Framebuffer Object instead. If you need to display the image, you can blit it to the Default Framebuffer.
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
A modern OpenGL program should not use the selection buffer or feedback mode. These are not 3D graphics rendering features, yet they have been part of GL since version 1.0. Selection and feedback run in software (CPU side). On some implementations, when used along with VBOs, it has been reported that performance is lousy.
A modern OpenGL program should do color picking (render each object with some unique color and use glReadPixels to find out what object your mouse was on) or do the picking with some 3rd party mathematics library.
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
Users notice that on some implementations points or lines are rendered a little differently than on others. This is because the GL spec allows some flexibility. Consider this:
glPointSize(5.0);
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_POINT_SMOOTH);
RenderMyPoints();
On some hardware, the points will look nice and round; on others, they will look like squares.
On some implementations, when you call glEnable(GL_POINT_SMOOTH) or glEnable(GL_LINE_SMOOTH) and you use shaders at the same time, your rendering speed goes down to 0.1 FPS. This is because the driver does software rendering. This would happen on AMD/ATI GPUs/drivers.
This is not a recommended method for anti-aliasing. Use multisampling instead.
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
Section 3.6.2 of the GL specification talks about the imaging subset. glColorTable and related operations are part of this subset. They are typically not supported by common GPUs and are software emulated. It is recommended that you avoid it.
If you find that your texture memory consumption is too high, use texture compression. If you really want to use paletted color-indexed textures, you can implement this yourself using a texture and a shader (see the paletted texture example further below).
Some OpenGL enumerators represent bits in a particular bitfield. All of these end in _BIT (before any extension suffix). Take a look at this example:
glEnable(GL_BLEND | GL_DRAW_BUFFER); // invalid
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT); // valid
The first line is wrong. Because neither of these enumerators ends in _BIT, they are not bitfields and thus should not be OR'd together.
By contrast, the second line is perfectly fine. All of these end in _BIT, so this makes sense.
You cannot control whether a driver does triple buffering. You could try to implement it yourself using a Framebuffer Object. But if the driver is already doing triple buffering, your code will only turn it into quadruple buffering, which is usually overkill.
Support for the EXT_paletted_texture extension has been dropped by the major GL vendors. If you really need paletted textures on new hardware, you may use shaders to achieve that effect.
Shader example:
//Fragment shader
#version 110
uniform sampler2D ColorTable;       //256 x 1 pixels
uniform sampler2D MyIndexTexture;
varying vec2 TexCoord0;

void main()
{
    //What color do we want to index?
    vec4 myindex = texture2D(MyIndexTexture, TexCoord0);
    //Do a dependent texture read
    vec4 texel = texture2D(ColorTable, myindex.xy);
    //Output the color
    gl_FragColor = texel;
}
ColorTable might be in a format of your choice such as GL_RGBA8. ColorTable could be a texture of 256 x 1 pixels in size.
MyIndexTexture can be in any format, though GL_R8 is quite appropriate (GL_R8 is available in GL 3.0). MyIndexTexture could be of any dimension such as 64 x 32.
We read MyIndexTexture and use the result as a texcoord to read ColorTable. If you wish to perform palette animation, or simply update the colors in the color table, you can submit new values to ColorTable with glTexSubImage2D. Assuming that the color table is in GL_RGBA8 format:
glBindTexture(GL_TEXTURE_2D, myColorTableID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 1, GL_BGRA, GL_UNSIGNED_BYTE, mypixels);
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
When multitexturing was introduced, a query for the number of texture units was introduced as well, which you can get with:
int MaxTextureUnits;
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &MaxTextureUnits);
You should not use the above because it will give a low number on modern GPUs.
In old OpenGL, each texture unit has its own texture environment state (glTexEnv), texture matrix, texture coordinate generation (glTexGen), texcoords (glTexCoord), clamp mode, mipmap mode, texture LOD, anisotropy.
Then came the programmable GPU. There aren't texture units anymore. Today, you have texture image units (TIU) which you can get with:
int MaxTextureImageUnits;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &MaxTextureImageUnits);
A TIU just stores the texture object's state, like the clamping, mipmaps, etc. TIUs are independent of texture coordinates; you can use whatever texture coordinate to sample whatever TIU.
Note that each shader stage has its own maximum number of texture image units; GL_MAX_TEXTURE_IMAGE_UNITS returns the count for fragment shaders only. The total number of image units across all shader stages is queried with GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS; this is the limit on the number of textures that can be bound at any one time, and it is also the limit on the image unit index passed to functions like glActiveTexture and glBindSampler.
For most modern hardware, the image unit count will be at least 8 for most stages. Vertex shaders used to be limited to 4 textures on older hardware. All 3.x-capable hardware will return at least 16 for each stage.
In summary, shader-based GL 2.0 and above programs should query GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS only, and should pass texture coordinates through generic vertex attributes rather than relying on the fixed-function texture coordinate limits.
In some cases, you might want to disable depth testing and still allow the depth buffer to be updated while you are rendering your objects. It turns out that if you disable depth testing (glDisable(GL_DEPTH_TEST)), GL also disables writes to the depth buffer. The correct solution is to keep the depth test enabled but tell GL to ignore its results with glDepthFunc(GL_ALWAYS). Be careful, because in this state, if you render a far away object last, the depth buffer will contain the values of that far object.
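A minimal sketch of that state:

glEnable(GL_DEPTH_TEST);    //keep the test enabled so depth writes still happen
glDepthFunc(GL_ALWAYS);     //but let every fragment pass
glDepthMask(GL_TRUE);       //make sure depth writes are on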
You may find that the glGet family of functions (glGetIntegerv, glGetFloatv, glGetBooleanv, glGetDoublev) is slow.
That's normal. Any function of the glGet form will likely be slow. nVidia and ATI/AMD recommend that you avoid them. The GL driver (and also the GPU) prefers to receive information in the up direction, from the CPU to the GPU. You can avoid all glGet calls if you track the information yourself.
Almost everything in OpenGL uses a coordinate system, such that when X goes right, Y goes up. This includes pixel transfer functions and texture coordinates.
For example, glReadPixels takes an x and y position. The y-axis is considered to run from the bottom (0) to the top (some value). This may seem counterintuitive to those used to window systems where the y-axis is inverted (your window's y-axis runs top to bottom, and so do your mouse coordinates). The solution for the mouse is obvious: windowHeight - mouseY.
For textures, GL considers the y-axis to run bottom to top, the bottom being 0.0 and the top being 1.0. Some people load their bitmap into a GL texture and wonder why it appears inverted on their model. The solution is simple: invert your bitmap, or invert your model's texcoords by doing 1.0 - v.
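A minimal sketch (not from the article) of the bitmap-flipping option, assuming a tightly packed RGBA8 image in CPU memory (requires <vector> and <cstring>):

void FlipImageRows(unsigned char *pixels, int width, int height)
{
    const int rowSize = width * 4;   //4 bytes per pixel for RGBA8
    std::vector<unsigned char> tmp(rowSize);
    for(int y = 0; y < height / 2; ++y)
    {
        unsigned char *top    = pixels + y * rowSize;
        unsigned char *bottom = pixels + (height - 1 - y) * rowSize;
        std::memcpy(tmp.data(), top, rowSize);
        std::memcpy(top, bottom, rowSize);
        std::memcpy(bottom, tmp.data(), rowSize);
    }
}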
It seems as if some people create a texture in their render function. Don't create resources in your render function. That goes for all the other glGen function calls as well. Don't read model files and create VBOs with them in your render function. Try to allocate resources at the beginning of your program. Release those resources when your program terminates.
Worse yet, some create textures (or any other GL object) in their render function and never call glDeleteTextures. Every time their render function gets called, a new texture is created without releasing the old one!
Warning: This section describes legacy OpenGL APIs that have been removed from core OpenGL 3.1 and above (they are only deprecated in OpenGL 3.0). It is recommended that you not use this functionality in your programs.
Some users use gluPerspective or glFrustum and pass it a znear value of 0.0. They quickly find that z-buffering doesn't work.
You can't have a znear value of 0.0 or less. If you were to use 0.0, the 3rd row, 4th column of the projection matrix will end up being 0.0. If you use a negative value, you would end up with wrong rendering results on screen.
Both znear and zfar need to be above 0.0. gluPerspective will not raise a GL error. glFrustum will generate a GL_INVALID_VALUE.
As for glOrtho, yes you can use negative values for znear and zfar.
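For gluPerspective and glFrustum, a minimal sketch of a valid setup with a small positive znear and a positive zfar (the exact values are arbitrary):

gluPerspective(45.0, (double)windowWidth / (double)windowHeight, 0.1, 100.0);
//Or, equivalently with glFrustum, keep both depth planes positive:
//glFrustum(left, right, bottom, top, 0.1, 100.0);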
A separate article on vertex transformation explains how vertices are transformed.
We are going to give this example with GL 1.1, but the same principle applies if you are using VBOs or any other feature from a later version of OpenGL.
What's wrong with this code?
GLfloat vertex[] = {0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0};
GLfloat normal[] = {0.0, 0.0, 1.0};
GLfloat color[] = {1.0, 0.7, 1.0, 1.0};
GLushort index[] = {0, 1, 2, 3};
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat)*3, vertex);
glNormalPointer(GL_FLOAT, sizeof(GLfloat)*3, normal);
glColorPointer(4, GL_FLOAT, sizeof(GLfloat)*4, color);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, index);
The intent is to render a single quad, but your array sizes don't match up. You have only 1 normal for your quad while GL wants 1 normal per vertex. You have one RGBA color for your quad while GL wants one color per vertex. You risk crashing your system because the GL driver will be reading from beyond the size of your supplied normal and color array.
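A minimal corrected sketch (an illustration of the fix, not code from the article): supply one normal and one color per vertex so every array covers all 4 indexed vertices.

GLfloat vertex[] = {0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  1.0, 1.0, 0.0,  0.0, 1.0, 0.0};
GLfloat normal[] = {0.0, 0.0, 1.0,  0.0, 0.0, 1.0,  0.0, 0.0, 1.0,  0.0, 0.0, 1.0};
GLfloat color[]  = {1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0,  1.0, 0.7, 1.0, 1.0};
GLushort index[] = {0, 1, 2, 3};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat)*3, vertex);
glNormalPointer(GL_FLOAT, sizeof(GLfloat)*3, normal);
glColorPointer(4, GL_FLOAT, sizeof(GLfloat)*4, color);
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_SHORT, index);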
This issue is also explained in a separate wiki article.
