OpenGL GLSL : Intel vs NVIDIA


My laptop has an integrated graphics card (Intel HD Graphics 620) and a dedicated one (NVIDIA GeForce 940MX).

I have a program object consisting of three shaders (vertex, geometry and fragment), which works correctly on the Intel chip, but when I try to run the program on the NVIDIA card I get the following error during the link step:

ProgramInfoLog : "frag.albedo" not declared as an output from the previous stage
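
For reference, that string is the program info log the driver fills in when glLinkProgram fails. A minimal sketch of the kind of host-side link-and-check sequence that produces it (the shader handle names are placeholders, not my exact code; it assumes an OpenGL loader header such as glad/GLEW and <iostream> are included) would be:

GLuint program = glCreateProgram();
glAttachShader(program, vertShader);  // previously compiled vertex shader
glAttachShader(program, geomShader);  // previously compiled geometry shader
glAttachShader(program, fragShader);  // previously compiled fragment shader
glLinkProgram(program);

GLint status = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status != GL_TRUE)
{
  GLchar log[1024];
  glGetProgramInfoLog(program, sizeof(log), nullptr, log);
  std::cerr << "ProgramInfoLog : " << log << std::endl;  // this is where the message above comes from
}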

Here is the code for the three shaders.

vertex shader

#version 450 core

layout (location = 0) in vec3 center;
layout (location = 1) in float radius;
layout (location = 2) in vec3 albedo;
layout (location = 3) in vec2 number;

out VertData
{
  vec3 center;
  float radius;
  vec3 albedo;
  vec2 number;
  vec3 colour;
} vert;

void main()
{
  float AtomNumber = number[0];
  uvec3 mask = uvec3(0x00FF0000, 0x0000FF00, 0x000000FF);
  uvec3 shift = uvec3(16, 8, 0);

  vec3 colour = ((uvec3(AtomNumber) & mask) >> shift) / 255.0f;

  vert.center = center;
  vert.radius = radius;
  vert.albedo = albedo;
  vert.number = number;
  vert.colour = colour;
}

geometry shader

#version 450 core

layout (points) in;
layout (triangle_strip, max_vertices = 4) out;

in VertData
{
  vec3 center;
  float radius;
  vec3 albedo;
  vec2 number;
  vec3 colour;
} vert[];

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

out FragData
{
  flat vec3 center;
  flat float radius;
  flat vec3 albedo;
  flat vec2 number;
  flat vec3 colour;
  smooth vec2 offset;
} frag;

void main()
{
  vec2 offsets[4];
  offsets[0] = vec2(-1.0f, -1.0f);
  offsets[1] = vec2(-1.0f, +1.0f);
  offsets[2] = vec2(+1.0f, -1.0f);
  offsets[3] = vec2(+1.0f, +1.0f);

  float BoxCorrection = 1.5f;

  for (int i = 0; i < 4; i++)
  {
    gl_Position = view * model * vec4(vert[0].center, 1.0f);

    frag.center = vec3(gl_Position);
    frag.radius = vert[0].radius;
    frag.albedo = vert[0].albedo;
    frag.number = vert[0].number;
    frag.colour = vert[0].colour;
    frag.offset = offsets[i] * BoxCorrection;

    gl_Position.xy += (frag.offset * frag.radius);
    gl_Position = projection * gl_Position;

    EmitVertex();
  }

  EndPrimitive();
}

fragment shader

#version 450

layout (location = 0) out vec3 center;
layout (location = 1) out vec3 normal;
layout (location = 2) out vec3 albedo;
layout (location = 3) out vec3 colour;

struct ImpostorData
{
  vec3 center;
  vec3 normal;
};

void SetImpostorData(out ImpostorData impostor);

in FragData
{
  flat vec3 center;
  flat float radius;
  flat vec3 albedo;
  flat vec2 number;
  flat vec3 colour;
  smooth vec2 offset;
} frag;

uniform mat4 projection;

void main()
{
  ImpostorData impostor;
  SetImpostorData(impostor);

  vec4 clip = projection * vec4(impostor.center, 1.0f);
  float depth = clip.z / clip.w;
  gl_FragDepth = ((gl_DepthRange.diff * depth) + gl_DepthRange.near + gl_DepthRange.far) * 0.5f;

  center = impostor.center;
  normal = impostor.normal;
  albedo = frag.albedo;
  colour = frag.colour;
}

void SetImpostorData(out ImpostorData impostor)
{
  // ...
}

As you can see, the interface blocks between the stages match member for member.

I wonder whether the NVIDIA driver requires all shader stages to be present, i.e. whether I need to add tessellation shaders between the vertex and geometry shaders.

I tried changing the line of the fragment shader where I use frag.albedo, like this:

// albedo = frag.albedo;
albedo = vec3(1.0f, 1.0f, 0.0f);

and now the error message refers to frag.center instead, which is used inside SetImpostorData():

ProgramInfoLog : "frag.center" not declared as an output from the previous stage

So I suppose something is wrong between the geometry and fragment shaders.

Thanks!

opengl
graphics
glsl
shader
nvidia
asked on Stack Overflow Feb 9, 2019 by Arctic Pi • edited Feb 9, 2019 by Arctic Pi

0 Answers

Nobody has answered this question yet.


User contributions licensed under CC BY-SA 3.0