/*
 * Copyright © 2014 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 */

#include "brw_fs.h"
#include "brw_cfg.h"
#include "brw_eu.h"

/** @file brw_fs_cmod_propagation.cpp
 *
 * Implements a pass that propagates the conditional modifier from a CMP x 0.0
 * instruction into the instruction that generated x.  For instance, in this
 * sequence
 *
 *    add(8)          g70<1>F    g69<8,8,1>F    4096F
 *    cmp.ge.f0(8)    null       g70<8,8,1>F    0F
 *
 * we can do the comparison as part of the ADD instruction directly:
 *
 *    add.ge.f0(8)    g70<1>F    g69<8,8,1>F    4096F
 *
 * If there had been a use of the flag register and another CMP using g70
 *
 *    add.ge.f0(8)    g70<1>F    g69<8,8,1>F    4096F
 *    (+f0) sel(8)    g71<1>F    g72<8,8,1>F    g73<8,8,1>F
 *    cmp.ge.f0(8)    null       g70<8,8,1>F    0F
 *
 * we can recognize that the CMP is generating the flag value that already
 * exists and therefore remove the instruction.
 */
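
/**
 * Propagate the conditional modifier from a CMP with a nonzero second source
 * into the ADD that produced the value being compared.
 *
 * A CMP sets the flag from src0 - src1, so an ADD that already computed that
 * difference (with one operand negated) produces the very flag value the CMP
 * would.  For example, the pair
 *
 *    add(8)          g5<1>F     g2<8,8,1>F      g64.5<0,1,0>F
 *    cmp.z.f0(8)     null<1>F   g2<8,8,1>F     -g64.5<0,1,0>F
 *
 * reduces to
 *
 *    add.z.f0(8)     g5<1>F     g2<8,8,1>F      g64.5<0,1,0>F
 */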
static bool
cmod_propagate_cmp_to_add(const gen_device_info *devinfo, bblock_t *block,
                          fs_inst *inst, unsigned dispatch_width)
{
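   /* True if an instruction between the candidate ADD and the CMP reads the
    * flag register.  If so, a new conditional mod cannot be installed on the
    * ADD; an identical one already on the ADD can still be reused.
    */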
   bool read_flag = false;

   foreach_inst_in_block_reverse_starting_from(fs_inst, scan_inst, inst) {
      if (scan_inst->opcode == BRW_OPCODE_ADD &&
          !scan_inst->is_partial_var_write(dispatch_width) &&
          scan_inst->exec_size == inst->exec_size) {
         bool negate;

         /* A CMP is basically a subtraction.  The result of the subtraction
          * must be the same as the result of the addition.  This means that
          * one of the operands must be negated.  So (a + b) vs (a == -b) or
          * (a + -b) vs (a == b).
          */
         if ((inst->src[0].equals(scan_inst->src[0]) &&
              inst->src[1].negative_equals(scan_inst->src[1])) ||
             (inst->src[0].equals(scan_inst->src[1]) &&
              inst->src[1].negative_equals(scan_inst->src[0]))) {
            negate = false;
         } else if ((inst->src[0].negative_equals(scan_inst->src[0]) &&
                     inst->src[1].equals(scan_inst->src[1])) ||
                    (inst->src[0].negative_equals(scan_inst->src[1]) &&
                     inst->src[1].equals(scan_inst->src[0]))) {
            negate = true;
         } else {
            goto not_match;
         }

         /* From the Sky Lake PRM Vol. 7 "Assigning Conditional Mods":
          *
          *    * Note that the [post condition signal] bits generated at
          *      the output of a compute are before the .sat.
          *
          * So we don't have to bail if scan_inst has saturate.
          */

         /* Otherwise, try propagating the conditional. */
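         /* When negate is set, the ADD computed -(src0 - src1) of the CMP,
          * so the condition has to be mirrored: (a - b) >= 0 exactly when
          * -(a - b) <= 0, hence brw_swap_cmod turns .ge into .le and .l
          * into .g while leaving .z and .nz alone.
          */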
         const enum brw_conditional_mod cond =
            negate ? brw_swap_cmod(inst->conditional_mod)
                   : inst->conditional_mod;

         if (scan_inst->can_do_cmod() &&
             ((!read_flag && scan_inst->conditional_mod == BRW_CONDITIONAL_NONE) ||
              scan_inst->conditional_mod == cond)) {
            scan_inst->conditional_mod = cond;
            inst->remove(block);
            return true;
         }
         break;
      }

   not_match:
      if (scan_inst->flags_written())
         break;

      read_flag = read_flag || scan_inst->flags_read(devinfo);
   }

   return false;
}

/**
 * Propagate conditional modifiers from NOT instructions
 *
 * Attempt to convert sequences like
 *
 *    or(8)           g78<8,8,1>    g76<8,8,1>UD    g77<8,8,1>UD
 *    ...
 *    not.nz.f0(8)    null          g78<8,8,1>UD
 *
 * into
 *
 *    or.z.f0(8)      g78<8,8,1>    g76<8,8,1>UD    g77<8,8,1>UD
 */
static bool
cmod_propagate_not(const gen_device_info *devinfo, bblock_t *block,
                   fs_inst *inst, unsigned dispatch_width)
{
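   /* NOT writes the bitwise inverse of its operand, so a .z or .nz test on
    * the NOT corresponds to the opposite test on the value it read;
    * brw_negate_cmod yields the cmod to install on the generating
    * instruction.  (The equivalence assumes the usual 0 / ~0 Boolean
    * encoding of the values these tests see.)
    */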
   const enum brw_conditional_mod cond = brw_negate_cmod(inst->conditional_mod);
   bool read_flag = false;

   if (cond != BRW_CONDITIONAL_Z && cond != BRW_CONDITIONAL_NZ)
      return false;

   foreach_inst_in_block_reverse_starting_from(fs_inst, scan_inst, inst) {
      if (regions_overlap(scan_inst->dst, scan_inst->size_written,
                          inst->src[0], inst->size_read(0))) {
         if (scan_inst->opcode != BRW_OPCODE_OR &&
             scan_inst->opcode != BRW_OPCODE_AND)
            break;

         if (scan_inst->is_partial_var_write(dispatch_width) ||
             scan_inst->dst.offset != inst->src[0].offset ||
             scan_inst->exec_size != inst->exec_size)
            break;

         if (scan_inst->can_do_cmod() &&
             ((!read_flag && scan_inst->conditional_mod == BRW_CONDITIONAL_NONE) ||
              scan_inst->conditional_mod == cond)) {
            scan_inst->conditional_mod = cond;
            inst->remove(block);
            return true;
         }
         break;
      }

      if (scan_inst->flags_written())
         break;

      read_flag = read_flag || scan_inst->flags_read(devinfo);
   }

   return false;
}

static bool
opt_cmod_propagation_local(const gen_device_info *devinfo,
                           bblock_t *block,
                           unsigned dispatch_width)
{
   bool progress = false;
   int ip = block->end_ip + 1;

   foreach_inst_in_block_reverse_safe(fs_inst, inst, block) {
      ip--;

      if ((inst->opcode != BRW_OPCODE_AND &&
           inst->opcode != BRW_OPCODE_CMP &&
           inst->opcode != BRW_OPCODE_MOV &&
           inst->opcode != BRW_OPCODE_NOT) ||
          inst->predicate != BRW_PREDICATE_NONE ||
          !inst->dst.is_null() ||
          (inst->src[0].file != VGRF && inst->src[0].file != ATTR &&
           inst->src[0].file != UNIFORM))
         continue;

      /* An ABS source modifier can only be handled when processing a compare
       * with a value other than zero.
       */
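      /* Propagating the cmod would evaluate the condition on the generating
       * instruction's unmodified result, silently dropping the ABS; e.g. a
       * cmp.l against 0F of an (abs) source can never be true, while the
       * same test on the raw value can be.
       */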
      if (inst->src[0].abs &&
          (inst->opcode != BRW_OPCODE_CMP || inst->src[1].is_zero()))
         continue;

      /* Only an AND.NZ can be propagated.  Many AND.Z instructions are
       * generated (for ir_unop_not in fs_visitor::emit_bool_to_cond_code).
       * Propagating those would require inverting the condition on the CMP.
       * This changes both the flag value and the register destination of the
       * CMP.  That result may be used elsewhere, so we can't change its value
       * on a whim.
       */
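      /* The AND.NZ case arises from Boolean sequences such as
       *
       *    cmp.l.f0(8)     g3<1>D     g3<8,8,1>F    0F
       *    and.nz.f0(8)    null       g3<8,8,1>D    1D
       *
       * where the AND.NZ merely re-tests the Boolean the CMP produced and
       * can be removed once its cmod has been propagated.
       */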
      if (inst->opcode == BRW_OPCODE_AND &&
          !(inst->src[1].is_one() &&
            inst->conditional_mod == BRW_CONDITIONAL_NZ &&
            !inst->src[0].negate))
         continue;

      if (inst->opcode == BRW_OPCODE_MOV &&
          inst->conditional_mod != BRW_CONDITIONAL_NZ)
         continue;

      /* A CMP with a second source of zero can match with anything.  A CMP
       * with a second source that is not zero can only match with an ADD
       * instruction.
       *
       * Only apply this optimization to floating-point sources.  It can fail
       * for integers.  For inputs a = 0x80000000, b = 4, int(0x80000000) < 4,
       * but int(0x80000000) - 4 overflows and results in 0x7ffffffc.  That's
       * not less than zero, so the flags get set differently than for (a < b).
       */
      if (inst->opcode == BRW_OPCODE_CMP && !inst->src[1].is_zero()) {
         if (brw_reg_type_is_floating_point(inst->src[0].type) &&
             cmod_propagate_cmp_to_add(devinfo, block, inst, dispatch_width))
            progress = true;

         continue;
      }

      if (inst->opcode == BRW_OPCODE_NOT) {
         progress = cmod_propagate_not(devinfo, block, inst, dispatch_width) ||
                    progress;
         continue;
      }

      bool read_flag = false;
      foreach_inst_in_block_reverse_starting_from(fs_inst, scan_inst, inst) {
         if (regions_overlap(scan_inst->dst, scan_inst->size_written,
                             inst->src[0], inst->size_read(0))) {
            if (scan_inst->is_partial_var_write(dispatch_width) ||
                scan_inst->dst.offset != inst->src[0].offset ||
                scan_inst->exec_size != inst->exec_size)
               break;

            /* CMP's result is the same regardless of dest type. */
            if (inst->conditional_mod == BRW_CONDITIONAL_NZ &&
                scan_inst->opcode == BRW_OPCODE_CMP &&
                (inst->dst.type == BRW_REGISTER_TYPE_D ||
                 inst->dst.type == BRW_REGISTER_TYPE_UD)) {
               inst->remove(block);
               progress = true;
               break;
            }

            /* If the AND wasn't handled by the previous case, it isn't safe
             * to remove it.
             */
            if (inst->opcode == BRW_OPCODE_AND)
               break;

            /* Not safe to use inequality operators if the types are
             * different.
             */
            if (scan_inst->dst.type != inst->src[0].type &&
                inst->conditional_mod != BRW_CONDITIONAL_Z &&
                inst->conditional_mod != BRW_CONDITIONAL_NZ)
               break;

            /* Comparisons operate differently for ints and floats. */
            if (scan_inst->dst.type != inst->dst.type) {
               /* We should propagate from a MOV to another instruction in a
                * sequence like:
                *
                *    and(16)          g31<1>UD    g20<8,8,1>UD    g22<8,8,1>UD
                *    mov.nz.f0(16)    null<1>F    g31<8,8,1>D
                */
               if (inst->opcode == BRW_OPCODE_MOV) {
                  if ((inst->src[0].type != BRW_REGISTER_TYPE_D &&
                       inst->src[0].type != BRW_REGISTER_TYPE_UD) ||
                      (scan_inst->dst.type != BRW_REGISTER_TYPE_D &&
                       scan_inst->dst.type != BRW_REGISTER_TYPE_UD)) {
                     break;
                  }
               } else if (scan_inst->dst.type == BRW_REGISTER_TYPE_F ||
                          inst->dst.type == BRW_REGISTER_TYPE_F) {
                  break;
               }
            }

            /* If the instruction generating inst's source also wrote the
             * flag, and inst is doing a simple .nz comparison, then inst
             * is redundant - the appropriate value is already in the flag
             * register.  Delete inst.
             */
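            /* For example, in
             *
             *    cmp.l.f0(8)     g26<1>D    g25<8,8,1>F    0F
             *    mov.nz.f0(8)    null       g26<8,8,1>D
             *
             * the MOV.NZ re-tests the CMP result for "not equal to false",
             * so the flag it would write is already in f0.
             */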
            if (inst->conditional_mod == BRW_CONDITIONAL_NZ &&
                !inst->src[0].negate &&
                scan_inst->flags_written()) {
               inst->remove(block);
               progress = true;
               break;
            }

            /* The conditional mod of the CMP/CMPN instructions behaves
             * specially because the flag output is not calculated from the
             * result of the instruction, but the other way around, which
             * means that even if the condmod to propagate and the condmod
             * from the CMP instruction are the same they will in general
             * give different results because they are evaluated based on
             * different inputs.
             */
            if (scan_inst->opcode == BRW_OPCODE_CMP ||
                scan_inst->opcode == BRW_OPCODE_CMPN)
               break;

            /* From the Sky Lake PRM Vol. 7 "Assigning Conditional Mods":
             *
             *    * Note that the [post condition signal] bits generated at
             *      the output of a compute are before the .sat.
             */
            if (scan_inst->saturate)
               break;

            /* From the Sky Lake PRM, Vol 2a, "Multiply":
             *
             *    "When multiplying integer data types, if one of the sources
             *    is a DW, the resulting full precision data is stored in
             *    the accumulator. However, if the destination data type is
             *    either W or DW, the low bits of the result are written to
             *    the destination register and the remaining high bits are
             *    discarded. This results in undefined Overflow and Sign
             *    flags. Therefore, conditional modifiers and saturation
             *    (.sat) cannot be used in this case."
             *
             * We just disallow cmod propagation on all integer multiplies.
             */
            if (!brw_reg_type_is_floating_point(scan_inst->dst.type) &&
                scan_inst->opcode == BRW_OPCODE_MUL)
               break;

            /* Otherwise, try propagating the conditional. */
            enum brw_conditional_mod cond =
               inst->src[0].negate ? brw_swap_cmod(inst->conditional_mod)
                                   : inst->conditional_mod;

            if (scan_inst->can_do_cmod() &&
                ((!read_flag && scan_inst->conditional_mod == BRW_CONDITIONAL_NONE) ||
                 scan_inst->conditional_mod == cond)) {
               scan_inst->conditional_mod = cond;
               inst->remove(block);
               progress = true;
            }
            break;
         }

         if (scan_inst->flags_written())
            break;

         read_flag = read_flag || scan_inst->flags_read(devinfo);
      }
   }

   return progress;
}

bool
fs_visitor::opt_cmod_propagation()
{
   bool progress = false;

   foreach_block_reverse(block, cfg) {
      progress = opt_cmod_propagation_local(devinfo, block, dispatch_width) ||
                 progress;
   }

   if (progress)
      invalidate_live_intervals();

   return progress;
}