Welcome to Mesa's GLSL compiler.  A brief overview of how things flow:

1) The lex- and yacc-based preprocessor takes the incoming shader string
and produces a new string containing the preprocessed shader.  This
takes care of things like #if, #ifdef, #define, and preprocessor macro
invocations.  Note that #version, #extension, and some others are
passed straight through.  See glcpp/*

2) The lex- and yacc-based parser takes the preprocessed string and
generates the AST (abstract syntax tree).  Almost no checking is
performed in this stage.  See glsl_lexer.ll and glsl_parser.yy.

3) The AST is converted to "HIR".  This is the intermediate
representation of the compiler.  Constructors are generated, function
calls are resolved to particular function signatures, and all the
semantic checking is performed.  See ast_*.cpp for the conversion, and
ir.h for the IR structures.

4) The driver (Mesa, or main.cpp for the standalone binary) performs
optimizations.  These include copy propagation, dead code elimination,
constant folding, and others.  Generally the driver will call
optimizations in a loop, as each may open up opportunities for other
optimizations to do additional work.  See most files called opt_*.cpp
and ir_*.cpp.
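
The loop idea in step 4 can be sketched as follows; the list-of-tuples
"IR" and the pass functions are hypothetical stand-ins for Mesa's C++
passes, not its actual API:

```python
# Minimal sketch of a fixed-point optimization loop: run every pass until
# a full round makes no progress, since each pass can expose work for the
# others. The tuple-based "IR" here is purely illustrative.

def constant_fold(instrs):
    """Fold ('add', int, int) into its result; report whether anything changed."""
    changed = False
    for i, ins in enumerate(instrs):
        if isinstance(ins, tuple) and ins[0] == "add" \
                and isinstance(ins[1], int) and isinstance(ins[2], int):
            instrs[i] = ins[1] + ins[2]
            changed = True
    return changed

def copy_propagate(instrs):
    """Replace ('copy', x) with x itself; report whether anything changed."""
    changed = False
    for i, ins in enumerate(instrs):
        if isinstance(ins, tuple) and ins[0] == "copy":
            instrs[i] = ins[1]
            changed = True
    return changed

def optimize(instrs, passes):
    # Keep looping as long as any pass reports progress.
    progress = True
    while progress:
        progress = False
        for opt in passes:
            progress = opt(instrs) or progress
    return instrs
```

Propagating the copy exposes a fold that the first round of constant
folding couldn't see, which is exactly why the driver loops to a fixed
point rather than running each pass once.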

5) Linking is performed.  This does checking to ensure that the
outputs of the vertex shader match the inputs of the fragment shader,
and assigns locations to uniforms, attributes, and varyings.  See
linker.cpp.

6) The driver may perform additional optimization at this point; for
example, dead code elimination previously couldn't remove functions or
global variable usage when it was unknown what other code would be
linked in.

7) The driver performs code generation out of the IR, taking a linked
shader program and producing a compiled program for each stage.  See
../mesa/program/ir_to_mesa.cpp for Mesa IR code generation.

FAQ:

Q: What is HIR versus IR versus LIR?

A: The idea behind the naming was that ast_to_hir would produce a
high-level IR ("HIR"), with things like matrix operations, structure
assignments, etc., present.  A series of lowering passes would occur
that do things like break matrix multiplication into a series of dot
products/MADs, make structure assignment be a series of assignment of
components, flatten if statements into conditional moves, and such,
producing a low level IR ("LIR").

However, it now appears that each driver will have different
requirements from a LIR.  A 915-generation chipset wants all functions
inlined, all loops unrolled, all ifs flattened, no variable array
accesses, and matrix multiplication broken down.  The Mesa IR backend
for swrast would like matrices and structure assignment broken down,
but it can support function calls and dynamic branching.  A 965 vertex
shader IR backend could potentially even handle some matrix operations
without breaking them down, but the 965 fragment shader IR backend
would want to break (almost) all operations down channel-wise
and perform optimization on that.  As a result, there's no single
low-level IR that will make everyone happy.  So that usage has fallen
out of favor, and each driver will perform a series of lowering passes
to take the HIR down to whatever restrictions it wants to impose
before doing codegen.
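
The per-driver lowering idea can be sketched like this; the pass names,
string-based IR, and pipelines are invented for illustration:

```python
# Each backend lists only the lowerings its hardware needs; the HIR stays
# high-level and each driver takes it down as far as it must. The passes
# below are trivial stand-ins for real lowering passes.

def lower_matrices(ir):
    return [i.replace("matvec", "dot4x") for i in ir]   # break mat*vec down

def flatten_ifs(ir):
    return [i.replace("if", "cmov") for i in ir]        # if -> conditional move

REGISTRY = {"lower_matrices": lower_matrices, "flatten_ifs": flatten_ifs}

# A 915-class chip wants everything flattened; a swrast-like backend keeps
# real branches and only needs matrices broken down.
I915_PIPELINE = ["lower_matrices", "flatten_ifs"]
SWRAST_PIPELINE = ["lower_matrices"]

def lower_for_driver(ir, pipeline):
    for name in pipeline:
        ir = REGISTRY[name](ir)
    return ir
```

Two drivers running different pipelines over the same HIR end up with
different LIRs, which is the point: the restrictions live in the driver,
not in a shared low-level IR.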

Q: How is the IR structured?

A: The best way to get started seeing it would be to run the
standalone compiler against a shader:

./glsl_compiler --dump-lir \
	~/src/piglit/tests/shaders/glsl-orangebook-ch06-bump.frag

So for example one of the ir_instructions in main() contains:

(assign (constant bool (1)) (var_ref litColor)  (expression vec3 * (var_ref SurfaceColor) (var_ref __retval) ) )

Or more visually:
                     (assign)
                 /       |        \
        (var_ref)  (expression *)  (constant bool 1)
         /          /           \
(litColor)      (var_ref)    (var_ref)
                  /                  \
           (SurfaceColor)          (__retval)

which came from:

litColor = SurfaceColor * max(dot(normDelta, LightDir), 0.0);

(the max call is not represented in this expression tree, as it was a
function call that got inlined but not brought into this expression
tree)

Each of those nodes is a subclass of ir_instruction.  A particular
ir_instruction instance may only appear once in the whole IR tree with
the exception of ir_variables, which appear once as variable
declarations:

(declare () vec3 normDelta)

and multiple times as the targets of variable dereferences:
...
(assign (constant bool (1)) (var_ref __retval) (expression float dot
 (var_ref normDelta) (var_ref LightDir) ) )
...
(assign (constant bool (1)) (var_ref __retval) (expression vec3 -
 (var_ref LightDir) (expression vec3 * (constant float (2.000000))
 (expression vec3 * (expression float dot (var_ref normDelta) (var_ref
 LightDir) ) (var_ref normDelta) ) ) ) )
...
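
The node kinds above (variable declarations, dereferences, expressions,
assignments) can be mirrored with toy classes; these are illustrative
stand-ins for the ir_instruction subclasses, printing the same
s-expression shape as ir_print_visitor:

```python
# Toy mirrors of ir_dereference_variable, ir_expression, and
# ir_assignment; only the dump() shape is meant to match the real printer.

class VarRef:
    def __init__(self, name):
        self.name = name
    def dump(self):
        return "(var_ref %s)" % self.name

class Expression:
    def __init__(self, ty, op, *operands):
        self.ty, self.op, self.operands = ty, op, operands
    def dump(self):
        ops = " ".join(o.dump() for o in self.operands)
        return "(expression %s %s %s)" % (self.ty, self.op, ops)

class Assign:
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
    def dump(self):
        # the leading constant is the (here always-true) write condition
        return "(assign (constant bool (1)) %s %s)" % (self.lhs.dump(),
                                                       self.rhs.dump())

tree = Assign(VarRef("litColor"),
              Expression("vec3", "*", VarRef("SurfaceColor"),
                         VarRef("__retval")))
```

Note that the two VarRef instances are distinct objects even though they
could name the same ir_variable, matching the rule that only variables
are referenced from multiple places in the tree.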

Each node has a type.  Expressions may involve several different types:
(declare (uniform ) mat4 gl_ModelViewMatrix)
(assign (constant bool (1)) (var_ref constructor_tmp) (expression
 vec4 * (var_ref gl_ModelViewMatrix) (var_ref gl_Vertex) ) )

An expression tree can be arbitrarily deep, and the compiler tries to
keep them structured like that so that things like algebraic
optimizations ((color * 1.0 == color) and ((mat1 * mat2) * vec == mat1
* (mat2 * vec))) or recognizing operation patterns for code generation
(vec1 * vec2 + vec3 == mad(vec1, vec2, vec3)) are easier.  This comes
at the expense of additional trickery in implementing some
optimizations like CSE where one must navigate an expression tree.
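
Keeping expressions as deep trees makes those rewrites a simple
recursive match; a sketch over a hypothetical tuple IR (not Mesa's
actual matcher):

```python
# Bottom-up rewrite over an expression tree: apply the algebraic identity
# color * 1.0 == color, and recognize (a * b) + c as a single mad().

def simplify(e):
    if not isinstance(e, tuple) or len(e) != 3:
        return e                       # leaf: a variable name or constant
    op, a, b = e[0], simplify(e[1]), simplify(e[2])
    if op == "*" and b == 1.0:
        return a                       # algebraic: x * 1.0 -> x
    if op == "+" and isinstance(a, tuple) and a[0] == "*":
        return ("mad", a[1], a[2], b)  # pattern: a * b + c -> mad(a, b, c)
    return (op, a, b)
```

Because operands are nested sub-trees rather than temporaries, the
multiply feeding the add is visible right at the add node, with no
def-use chasing required.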

Q: Why no SSA representation?

A: Converting an IR tree to SSA form makes dead code elimination,
common subexpression elimination, and many other optimizations much
easier.  However, in our primarily vector-based language, there's some
major questions as to how it would work.  Do we do SSA on the scalar
or vector level?  If we do it at the vector level, we're going to end
up with many different versions of the variable when encountering code
like:

(assign (constant bool (1)) (swiz x (var_ref __retval) ) (var_ref a) )
(assign (constant bool (1)) (swiz y (var_ref __retval) ) (var_ref b) )
(assign (constant bool (1)) (swiz z (var_ref __retval) ) (var_ref c) )

If every masked update of a component relies on the previous value of
the variable, then we're probably going to be quite limited in our
dead code elimination wins, and recognizing common expressions may
just not happen.  On the other hand, if we operate channel-wise, then
we'll be prone to optimizing the operation on one of the channels at
the expense of making its instruction flow different from the other
channels, and a vector-based GPU would end up with worse code than if
we didn't optimize operations on that channel!
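
The vector-level outcome can be made concrete; this hypothetical renamer
shows how every masked write mints a whole new version of the vector,
each depending on the previous one for its untouched channels:

```python
# Vector-level SSA renaming over masked component writes: a write to one
# channel still defines a brand-new version of the entire vector.

def to_vector_ssa(writes):
    version = {}
    out = []
    for dst, chan, src in writes:            # e.g. ("__retval", "x", "a")
        old = version.get(dst, 0)
        version[dst] = old + 1
        # the new version merges the old one's other channels, so none of
        # these writes is independently dead
        out.append(("%s_%d" % (dst, version[dst]), chan, src,
                    "%s_%d" % (dst, old)))
    return out

versions = to_vector_ssa([("__retval", "x", "a"),
                          ("__retval", "y", "b"),
                          ("__retval", "z", "c")])
```

Three component stores become three chained whole-vector versions, which
is exactly the dependency chain that limits dead code elimination and
CSE at the vector level.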

Once again, it appears that our optimization requirements are driven
significantly by the target architecture.  For now, targeting the Mesa
IR backend, SSA does not appear to be that important to producing
excellent code, but we do expect to do some SSA-based optimizations
for the 965 fragment shader backend when that is developed.

Q: How should I expand instructions that take multiple backend instructions?

A: Sometimes you'll have to do the expansion in your code generation --
see, for example, ir_to_mesa.cpp's handling of ir_unop_sqrt.  However,
in many cases you'll want to do a pass over the IR to convert
non-native instructions to a series of native instructions.  For
example, for the Mesa backend we have ir_div_to_mul_rcp.cpp because
Mesa IR (and many hardware backends) only have a reciprocal
instruction, not a divide.  Implementing non-native instructions this
way gives the chance for constant folding to occur, so (a / 2.0)
becomes (a * 0.5) after codegen instead of (a * (1.0 / 2.0)).
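
A sketch of that shape of pass, mirroring the ir_div_to_mul_rcp idea on
a hypothetical tuple IR:

```python
# Rewrite a / b as a * rcp(b), then let constant folding collapse
# rcp(constant), so (a / 2.0) reaches codegen as (a * 0.5).

def div_to_mul_rcp(e):
    if isinstance(e, tuple) and e[0] == "/":
        return ("*", e[1], ("rcp", e[2]))
    return e

def fold(e):
    if not isinstance(e, tuple):
        return e
    folded = (e[0],) + tuple(fold(x) for x in e[1:])
    if folded[0] == "rcp" and isinstance(folded[1], float):
        return 1.0 / folded[1]        # fold the reciprocal of a constant
    return folded
```

Doing the rewrite in the IR, before folding runs, is what gives the
folder the chance to see rcp(2.0) at all.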

Q: How should I handle my special hardware instructions with respect to IR?

A: Our current theory is that if multiple targets have an instruction for
some operation, then we should probably be able to represent that in
the IR.  Generally this is in the form of an ir_{bin,un}op expression
type.  For example, we initially implemented fract() using (a -
floor(a)), but both 945 and 965 have instructions to give that result,
and it would also simplify the implementation of mod(), so
ir_unop_fract was added.  The following areas need updating to add a
new expression type:

ir.h (new enum)
ir.cpp:operator_strs (used for ir_reader)
ir_constant_expression.cpp (you probably want to be able to constant fold)
ir_validate.cpp (check users have the right types)

You may also need to update the backends if they will see the new expr type:

../mesa/program/ir_to_mesa.cpp

You can then use the new expression from builtins (if all backends
would rather see it), or scan the IR and convert to use your new
expression type (see ir_mod_to_floor, for example).
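
As a toy version of that checklist, here is roughly what the
ir_unop_fract pieces amount to, with the file each part mirrors noted in
comments; the Python names are illustrative, not Mesa's:

```python
import math

# ir.cpp:operator_strs -- the printable name the ir_reader also parses
OPERATOR_STRS = {"ir_unop_fract": "fract"}

# ir_constant_expression.cpp -- constant folding for the new opcode
def constant_fold(op, a):
    if op == "ir_unop_fract":
        return a - math.floor(a)      # fract(a) == a - floor(a)
    raise NotImplementedError(op)

# ir_validate.cpp -- check users hand the opcode the right operand types
def validate(op, operand_type):
    if op == "ir_unop_fract":
        assert operand_type in ("float", "vec2", "vec3", "vec4")
```

The enum entry itself (ir.h) has no analogue here; the point is that one
opcode touches the printer/reader, the folder, and the validator in
lockstep.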

Q: How is memory management handled in the compiler?

A: The hierarchical memory allocator "talloc" developed for the Samba
project is used, so that things like optimization passes don't have to
worry about their garbage collection so much.  It has a few nice
features, including low performance overhead and good debugging
support that's trivially available.

Generally, each stage of the compile creates a talloc context and
allocates its memory out of that or children of it.  At the end of the
stage, the pieces still live are stolen to a new context and the old
one freed, or the whole context is kept for use by the next stage.

For IR transformations, a temporary context is used, then at the end
of all transformations, reparent_ir reparents all live nodes under the
shader's IR list, and the old context full of dead nodes is freed.
When developing a single IR transformation pass, this means that you
want to allocate instruction nodes out of the temporary context, so if
it becomes dead it doesn't live on as the child of a live node.  At
the moment, optimization passes aren't passed that temporary context,
so they find it by calling talloc_parent() on a nearby IR node.  The
talloc_parent() call is expensive, so many passes will cache the
result of the first talloc_parent().  Cleaning up all the optimization
passes to take a context argument and not call talloc_parent() is left
as an exercise.
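
The context-and-steal lifecycle can be sketched with a toy allocator;
this illustrates the talloc idea only, not its C API:

```python
# Toy hierarchical allocator: freeing a context frees everything still
# parented under it; survivors are "stolen" (reparented) to a
# longer-lived context first, which is what reparent_ir does for live IR.

class Ctx:
    def __init__(self, parent=None, payload=None):
        self.parent, self.payload = parent, payload
        self.children, self.freed = [], False
        if parent is not None:
            parent.children.append(self)

    def steal(self, new_parent):
        # move this node (and implicitly its subtree) under new_parent
        self.parent.children.remove(self)
        self.parent = new_parent
        new_parent.children.append(self)

    def free(self):
        self.freed = True
        for child in self.children:
            child.free()

shader = Ctx()
tmp = Ctx(shader)                  # temporary context for one pass
live = Ctx(tmp, "kept node")
dead = Ctx(tmp, "dead node")
live.steal(shader)                 # reparent the survivor to the shader
tmp.free()                         # the context full of dead nodes dies
```

Allocating new instructions out of the temporary context, as the text
advises, is what lets tmp.free() reclaim them without any per-node
bookkeeping.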

Q: What is the file naming convention in this directory?

A: Initially, there really wasn't one.  We have since adopted one:

 - Files that implement code lowering passes should be named lower_*
   (e.g., lower_noise.cpp).
 - Files that implement optimization passes should be named opt_*.
 - Files that implement a class that is used throughout the code should
   take the name of that class (e.g., ir_hierarchical_visitor.cpp).
 - Files that contain code not fitting in one of the previous
   categories should have a sensible name (e.g., glsl_parser.yy).