i965: Fix brw_finish_batch to grow the batchbuffer.

brw_finish_batch emits commands needed at the end of every batch buffer,
including any workarounds.  In the past, we freed up some "reserved"
batch space before calling it, so we would never have to flush during
it.  This was error-prone and easy to screw up, so I deleted it a while
back in favor of growing the batch.

There were two problems:

1. We're in the middle of flushing, so brw->no_batch_wrap is guaranteed
   not to be set.  Using BEGIN_BATCH() to emit commands would cause a
   recursive flush rather than growing the buffer as intended.

2. We had already recorded the throttling batch before growing, and
   growing replaces brw->batch.bo with a different (larger) buffer, so
   growing would break throttling (see the sketch below).
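
For context, the batch-space check works roughly like this (a paraphrased
sketch, not the verbatim Mesa code; the grow_buffer signature in
particular is approximated):

   /* Simplified sketch of intel_batchbuffer_require_space(). */
   static void
   require_space(struct brw_context *brw, unsigned sz)
   {
      if (intel_batchbuffer_space(&brw->batch) < sz) {
         if (!brw->no_batch_wrap) {
            /* Normal path: flush and start a fresh batch.  Hitting
             * this from inside brw_finish_batch() recurses back into
             * the flush -- problem 1. */
            intel_batchbuffer_flush(brw);
         } else {
            /* Wrapping is forbidden, so grow in place.  Growing swaps
             * brw->batch.bo for a new, larger BO, so a throttle
             * pointer recorded earlier goes stale -- problem 2. */
            grow_buffer(brw, &brw->batch.bo, &brw->batch.map,
                        &brw->batch.map_next, sz);
         }
      }
   }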

These are easily remedied by shuffling some code around and whacking
brw->no_batch_wrap in brw_finish_batch().  This also now includes the
final workarounds in the batch usage statistics.  Found by inspection.
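
Concretely, the flush path now runs in this order (condensed from the
diff below):

   /* _intel_batchbuffer_flush_fence(), after the fix: */
   assert(!brw->no_batch_wrap);

   brw_finish_batch(brw);      /* sets no_batch_wrap; may grow batch.bo */
   intel_upload_finish(brw);

   if (brw->throttle_batch[0] == NULL) {
      /* Recorded only after any growth, so this is the final BO. */
      brw->throttle_batch[0] = brw->batch.bo;
      brw_bo_reference(brw->throttle_batch[0]);
   }

   ret = do_flush_locked(brw, in_fence_fd, out_fence_fd);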

Fixes: 2c46a67b41 ("i965: Delete BATCH_RESERVED handling.")

Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Kenneth Graunke 2017-09-18 09:55:57 -07:00
parent 5a746021ce
commit 3bec992e36

src/mesa/drivers/dri/i965/intel_batchbuffer.c

@@ -631,6 +631,8 @@ brw_finish_batch(struct brw_context *brw)
 {
    const struct gen_device_info *devinfo = &brw->screen->devinfo;
 
+   brw->no_batch_wrap = true;
+
    /* Capture the closing pipeline statistics register values necessary to
     * support query objects (in the non-hardware context world).
     */
@@ -672,6 +674,8 @@ brw_finish_batch(struct brw_context *brw)
       /* Round batchbuffer usage to 2 DWORDs. */
       intel_batchbuffer_emit_dword(&brw->batch, MI_NOOP);
    }
+
+   brw->no_batch_wrap = false;
 }
 
 static void
@@ -885,6 +889,12 @@ _intel_batchbuffer_flush_fence(struct brw_context *brw,
    if (USED_BATCH(brw->batch) == 0)
       return 0;
 
+   /* Check that we didn't just wrap our batchbuffer at a bad time. */
+   assert(!brw->no_batch_wrap);
+
+   brw_finish_batch(brw);
+   intel_upload_finish(brw);
+
    if (brw->throttle_batch[0] == NULL) {
       brw->throttle_batch[0] = brw->batch.bo;
       brw_bo_reference(brw->throttle_batch[0]);
@@ -904,13 +914,6 @@ _intel_batchbuffer_flush_fence(struct brw_context *brw,
               brw->batch.state_relocs.reloc_count);
    }
 
-   brw_finish_batch(brw);
-
-   intel_upload_finish(brw);
-
-   /* Check that we didn't just wrap our batchbuffer at a bad time. */
-   assert(!brw->no_batch_wrap);
-
    ret = do_flush_locked(brw, in_fence_fd, out_fence_fd);
 
    if (unlikely(INTEL_DEBUG & DEBUG_SYNC)) {