How to control the frames per second rendered by the OpenGL driver?


For a simple scene on Windows with Intel graphics, we are seeing 500 fps. Of course that does not mean we can actually display that many frames given a normal screen refresh rate of 60 to 130 Hz, but there is clearly headroom. On my laptop running Ubuntu with a decent NVIDIA card, the same code shows about 2100 fps and the animation is insanely fast. But on an Ubuntu desktop with an RTX 2080 card, it runs at just under 30 fps. We're pretty sure the rendering isn't actually taking that long on the card; the driver is somehow throttling. Is there such a thing? I would expect the desktop to get anywhere from 3 to 5 times the performance of my laptop.

The test in question loads a .3ds model of a frog and a couple of textured cubes, then transforms and renders them repeatedly.

I am attaching the device driver information and will post the code used with GLFW in the main render loop, but first a key new piece of information: even if I comment out all rendering, the speed is still 30 frames per second. So @Botje, you might be right, but I don't request a swap interval anywhere; could the defaults differ between machines?
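(Editor's note, a sketch of the usual way to remove that ambiguity: GLFW never calls glfwSwapInterval on your behalf, so the effective swap interval is whatever the driver defaults to; on NVIDIA/Linux that default is the "Sync to VBlank" setting in nvidia-settings, which can also still override the application's request. Asking for an interval explicitly at least makes the intent visible.)

```
// Sketch only: place this immediately after glfwMakeContextCurrent(win).
// 0 asks the driver not to wait for vblank at all; 1 syncs to every
// vblank (60 fps on a 60 Hz mode, 30 fps on a 30 Hz mode).
glfwMakeContextCurrent(win);
glfwSwapInterval(0);
```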

sudo lshw -c video
*-display                        description: VGA compatible controller
       product: TU104 [GeForce RTX 2080 Rev. A]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:01:00.0
       version: a1
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
       configuration: driver=nvidia latency=0
       resources: irq:130 memory:a3000000-a3ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:3000(size=128) memory:c0000-dffff

OK, it is using NVIDIA, but is the bus transfer really only 33 MHz? That seems incredibly slow.
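(Editor's note: the 33MHz that lshw prints is the legacy PCI clock field, not the negotiated PCIe link rate. A command sketch for checking the real link, using the bus address 01:00.0 from the lshw output above:)

```
# LnkCap = what the slot/card support, LnkSta = what was actually negotiated
sudo lspci -vv -s 01:00.0 | grep -i -E 'LnkCap:|LnkSta:'
```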
 dmesg | grep nvidia
[    5.713378] nvidia: loading out-of-tree module taints kernel.
[    5.713382] nvidia: module license 'NVIDIA' taints kernel.
[    5.717639] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[    5.722567] nvidia-nvlink: Nvlink Core is being initialized, major device number 237
[    5.724195] nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
[    5.859187] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  440.59  Thu Jan 30 00:59:18 UTC 2020
[    5.861979] [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
[    5.861980] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:01:00.0 on minor 0
[    5.881141] nvidia-uvm: Loaded the UVM driver, major device number 235.
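
(Editor's note, a guess worth checking before the code: a steady 30 fps with no draw calls looks like vsync against a 30 Hz display mode, e.g. 4K over an HDMI 1.4 cable. The active mode and refresh rate can be read off with xrandr:)

```
# The asterisk marks the active mode; the number next to it is the refresh rate in Hz
xrandr | grep '\*'
```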

The following minimal working example does not render anything and STILL draws at only 30 fps, but it is crashing on the first OpenGL call. I cannot see what I am doing differently; I cut out a lot of code, and doubtless something was important.

#include <iostream>
#include <glad/glad.h>
#include <GLFW/glfw3.h>
using namespace std;

constexpr int width = 1024, height = 1024;
int main() {
  glfwInit();
  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
  GLFWwindow* win = glfwCreateWindow(width, height, "test", nullptr, nullptr);
  if (win == nullptr) {
    glfwTerminate();
    throw "Failed to open GLFW window";
  }
  glfwMakeContextCurrent(win);
  if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
    throw "Failed to initialize GLAD";
  glEnable(GL_BLEND);
  glEnable(GL_LINE_SMOOTH);
  // glEnable(GL_TEXTURE);  // removed: GL_TEXTURE is not a glEnable capability
                            // (it is a matrix-mode enum) and raises GL_INVALID_ENUM
  glDepthFunc(GL_NEVER);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//  glm::mat4 projection = glm::ortho(0.0f, float(width), float(height), 0.0f);
  double startTime = glfwGetTime();  // glfwGetTime() returns double, not float
  double renderTime = 0;
  int frameCount = 0;
  while (!glfwWindowShouldClose(win)) {
    glfwPollEvents();  // Check and call events
    float startRender = glfwGetTime();
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);  // Clear the colorbuffer and depth
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //render(); literally draw nothing
    glfwSwapBuffers(win);  // Swap buffer so the scene shows on screen
    renderTime += glfwGetTime() - startRender;
    if (frameCount >= 150) {
      double endTime = glfwGetTime();
      double elapsed = endTime - startTime;
      cerr << "Elapsed=" << elapsed << " FPS= " << frameCount / elapsed
           << " render=" << renderTime << '\n';
      frameCount = 0;
      renderTime = 0;
      startTime = endTime;
    } else {
      frameCount++;
    }
  }
  glfwDestroyWindow(win);
  glfwTerminate();
}

compiled with:

g++ -g testwindow.cc glad.o -lglfw -lGL -ldl

glxinfo | grep render outputs:

direct rendering: Yes
OpenGL renderer string: GeForce RTX 2080/PCIe/SSE2
    GL_ARB_conditional_render_inverted, GL_ARB_conservative_depth, 
    GL_NVX_conditional_render, GL_NVX_gpu_memory_info, GL_NVX_nvenc_interop, 
    GL_NV_compute_shader_derivatives, GL_NV_conditional_render, 
    GL_NV_path_rendering, GL_NV_path_rendering_shared_edge, 
    GL_NV_stereo_view_rendering, GL_NV_texgen_reflection, 
    GL_ARB_compute_variable_group_size, GL_ARB_conditional_render_inverted, 
    GL_NVX_conditional_render, GL_NVX_gpu_memory_info, GL_NVX_nvenc_interop, 
    GL_NV_compute_shader_derivatives, GL_NV_conditional_render, 
    GL_NV_path_rendering, GL_NV_path_rendering_shared_edge, 
    GL_NV_stereo_view_rendering, GL_NV_texgen_reflection, 
    GL_EXT_multisample_compatibility, GL_EXT_multisampled_render_to_texture, 
    GL_EXT_multisampled_render_to_texture2, 
    GL_EXT_raster_multisample, GL_EXT_render_snorm, GL_EXT_robustness, 
    GL_NV_conditional_render, GL_NV_conservative_raster, 
    GL_NV_packed_float_linear, GL_NV_path_rendering, 
    GL_NV_path_rendering_shared_edge, GL_NV_pixel_buffer_object, 
    GL_NV_shadow_samplers_cube, GL_NV_stereo_view_rendering, 
    GL_OES_element_index_uint, GL_OES_fbo_render_mipmap, 
    GL_OVR_multiview_multisampled_render_to_texture
Tags: opengl, driver, throttling
asked on Stack Overflow Jun 20, 2020 by Dov • edited Jun 21, 2020 by Dov

0 Answers

Nobody has answered this question yet.


User contributions licensed under CC BY-SA 3.0