OpenGL Render To Texture 10x Slower than DirectX

That’s my main problem right now: calling mSceneManager->manualRender() takes roughly 10–20 times longer under Ogre’s OpenGL render system than under DirectX 9.

After getting my app running in DirectX, I used NVIDIA PerfHUD to debug the framerate spikes – only to find that DirectX was vastly outperforming OpenGL! Since I can’t really afford gDEBugger ($800), I’m stuck inserting my mini-profiler into the GLRenderSystem code to figure out which parts are slow. That will be good experience and will help me get more familiar with Ogre’s RenderSystem calls.
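For the curious, the mini-profiler approach amounts to bracketing suspect calls with a scoped timer. Here’s a minimal standalone sketch of that idea; the class name and usage are illustrative, not part of Ogre’s API:

```cpp
#include <chrono>
#include <cstdio>
#include <string>

// RAII timer: prints the elapsed time for the enclosing scope, so you can
// wrap individual render-system calls and see where the frame time goes.
class ScopedTimer {
public:
    explicit ScopedTimer(std::string label, double* outMs = nullptr)
        : mLabel(std::move(label)), mOutMs(outMs),
          mStart(std::chrono::steady_clock::now()) {}

    ~ScopedTimer() {
        using namespace std::chrono;
        double ms = duration<double, std::milli>(
                        steady_clock::now() - mStart).count();
        if (mOutMs) *mOutMs = ms;  // optional: also report to caller
        std::printf("%s: %.3f ms\n", mLabel.c_str(), ms);
    }

private:
    std::string mLabel;
    double* mOutMs;
    std::chrono::steady_clock::time_point mStart;
};
```

Usage would look like wrapping the suspect call in a scope, e.g. `{ ScopedTimer t("manualRender"); mSceneManager->manualRender(...); }`, then comparing the printed numbers between the GL and D3D9 render systems.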

Misc Notes:
– Make sure you play around with your RenderSystem configuration settings. I had set my display frequency to 59 Hz, which cut the framerate in half. Not sure why I had it set to that value.
– With OpenGL you don’t need the RenderSystem to issue _beginFrame() and _endFrame() calls, but you do in DirectX.
– Fixed a bug where, when writing dynamic textures, I was using pointer math that corrupted my textures in DirectX. Switching to array notation fixed the issue. I haven’t spent much time figuring out why it caused so much grief (it worked fine with the OpenGL RenderSystem).
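A plausible culprit for that dynamic-texture bug is row pitch: D3D9 can hand back a locked buffer whose rows are padded wider than the image, while GL buffers often have pitch equal to the width, so pointer math that assumes `width` silently works on one and corrupts the other. This standalone sketch (no Ogre calls; in Ogre the pitch would come from `PixelBox::rowPitch` after locking the HardwarePixelBuffer) shows the pitch-aware way to fill a locked buffer:

```cpp
#include <cstdint>
#include <vector>

// Fill a locked texture buffer row by row. The key detail: advance between
// rows by rowPitch (pixels per row as laid out in memory), NOT by width.
// When rowPitch > width, the extra pixels are driver padding and must be
// left alone.
void fillGradient(std::uint32_t* dest, int width, int height, int rowPitch)
{
    for (int y = 0; y < height; ++y) {
        std::uint32_t* row = dest + y * rowPitch;  // step by pitch
        for (int x = 0; x < width; ++x)
            row[x] = static_cast<std::uint32_t>((x + y) & 0xFF);
    }
}
```

With a padded buffer (say width 4 but pitch 6), indexing by `y * width + x` would smear each row into the next one’s padding, which matches the “messed up textures in DirectX only” symptom.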

Next, I plan on profiling the GLRenderSystem to track down those RTT issues, and upgrading to the newly released Ogre 1.6.3!