We provide for this by changing the value of the argument-list
production from a list of strings (string_list_t) to a new
data-structure that holds a list of lists of strings
(argument_list_t).
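As a sketch, (the type and field names here are illustrative of the
idea rather than the exact code), argument_list_t is just a singly
linked list whose nodes each hold one string_list_t:

    typedef struct argument_node {
            string_list_t *argument;
            struct argument_node *next;
    } argument_node_t;

    typedef struct argument_list {
            argument_node_t *head;
            argument_node_t *tail;
    } argument_list_t;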
Then we print the final string list up at the top-level content
production along with all other printing.
Additionally, having macro-expansion productions that create values
will make it easier to solve problems like composed function-like
macro invocations in the future.
Previously, printing was occurring all over the place. Here we
document that it should all be happening at the top-level content
production, and we move the printing of directive newlines.
The printing of expanded macros is still happening in lower-level
productions, but we plan to fix that soon.
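A rough sketch of where the grammar is headed, (the nonterminal and
helper names here are illustrative, not necessarily those in the
actual grammar):

    content:
            /* empty */
    |       content token {
                    /* All printing belongs here at the top level. */
                    print_token (parser, $2);
            }
    |       content directive NEWLINE {
                    /* Directive newlines are now printed here. */
                    printf ("\n");
            }
    ;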
Instead of "parameter_list" and "replacement_list" just use
"parameters" and "replacements". This is consistent with the existing
"arguments" and keeps the line length down in the face of the
now-longer "string_list_t" rather than "list_t".
It seems strange to always be returning SPACE tokens, but since we
were already needing to return a SPACE token in some cases, this
actually simplifies our lexer.
This also allows us to fix two whitespace-handling differences
compared to "gcc -E", so the recently modified test suite now passes
once again.
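A sketch of the simplified rule, (the pattern name is illustrative):
rather than tracking state to decide when whitespace matters, the
lexer returns a SPACE token for every run of horizontal whitespace
and lets the parser decide:

    {HSPACE}+ {
            /* Always return a single SPACE token; the parser
             * decides whether the space is significant. */
            return SPACE;
    }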
Previously our parser was incorrectly treating this case as a
function-like macro. We fix this by conditionally passing a SPACE
token from the lexer, (but only for the space that appears
immediately after the identifier following #define).
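For example, (macro names chosen just for illustration), the only
textual difference between an object-like macro whose replacement
begins with '(' and a function-like macro is that one space:

    #define FOO (1 + 1)     /* object-like: expands to "(1 + 1)" */
    #define BAR(x) (x + 1)  /* function-like: takes a parameter x */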
Previously, an empty argument could be parsed as either an "argument_list"
directly or first as an "argument" and then an "argument_list".
We fix this by removing the possibility of an empty "argument_list"
directly.
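A sketch of the change, (production names as discussed above, though
the exact rules may differ):

    /* Before: ambiguous, since "argument" may itself be empty. */
    argument_list:
            /* empty */
    |       argument
    |       argument_list ',' argument
    ;

    /* After: an empty argument list is always parsed as a single
     * (empty) "argument", so there is only one possible parse. */
    argument_list:
            argument
    |       argument_list ',' argument
    ;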
We accept the structure of arguments in both macro definition and
macro invocation, but we don't yet expand those arguments. This is
just enough code to pass the recently-added tests, but does not yet
provide any sort of useful function-like macro.
This is just a minor style improvement for now. But the same
mechanism, (having the lexer peek into the table of defined macros),
will be essential when we add function-like macros in addition to the
current object-like macros.
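A sketch of the mechanism, (the token names and the yyextra wiring
here are hypothetical):

    {IDENTIFIER} {
            /* Peek into the parser's table of defined macros so
             * that macro names and plain identifiers can be
             * returned as distinct token types. */
            if (hash_table_find (yyextra->defines, yytext))
                    return MACRO;
            return IDENTIFIER;
    }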
Previously we had two copies of all top-level actions, (once in a
list context and once in a non-list context). It is much simpler to
instead have a single list-context production with no action and then
have the actions only in their own non-list contexts.
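A sketch of the resulting shape, (nonterminal names illustrative):

    /* The list context: one production, no actions. */
    input:
            /* empty */
    |       input line
    ;

    /* Each action now appears exactly once, in its own
     * non-list context. */
    line:
            directive_line  { /* handle directive */ }
    |       text_line       { /* print text */ }
    ;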
We are able to remove all state by simply passing NEWLINE through
as a token unconditionally, (as opposed to only passing newline when
on a directive line as we did previously).
This isn't ideal for two reasons:
1. There's a bunch of stateful redundancy in the lexer that should be
cleaned up.
2. The hash table does not provide a mechanism to delete an entry, so
we waste memory to add a new NULL entry in front of the existing
entry with the same key.
But this does at least work, (it passes the recently added undef test
case).
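A sketch of the workaround, (the function name is hypothetical, and
we assume hash_table_insert takes its data before its key and that
lookup returns the most recently inserted entry for a key):

    static void
    _undef (glcpp_parser_t *parser, const char *identifier)
    {
            /* No way to delete, so shadow the existing entry by
             * inserting a NULL value in front of it. Lookup now
             * finds the NULL entry first and treats the macro as
             * undefined, at the cost of the leaked old entry. */
            hash_table_insert (parser->defines, NULL, identifier);
    }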
The lexer was previously using strdup (expecting the parser to free),
but is now more consistent, easier to use, and slightly more efficient
by using talloc along with the parser.
Also, we add xtalloc and xtalloc_strdup wrappers around talloc and
talloc_strdup to put all of the out-of-memory-checking code in one
place.
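A sketch of the wrappers, (the talloc() wrapper itself is shown via
talloc_size, since talloc() is a type-based macro; the real code may
differ in detail):

    #include <stdio.h>
    #include <stdlib.h>
    #include <talloc.h>

    void *
    xtalloc_size (const void *ctx, size_t size)
    {
            void *ret = talloc_size (ctx, size);
            if (ret == NULL) {
                    fprintf (stderr, "Out of memory.\n");
                    exit (1);
            }
            return ret;
    }

    char *
    xtalloc_strdup (const void *ctx, const char *str)
    {
            char *ret = talloc_strdup (ctx, str);
            if (ret == NULL) {
                    fprintf (stderr, "Out of memory.\n");
                    exit (1);
            }
            return ret;
    }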
We now store a list of tokens in our hash-table rather than a single
string. This lets us replace each macro in the value as necessary.
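A sketch of the new storage, (the list type and helpers here are
illustrative), for a definition like "#define FOO BAR + 1":

    string_list_t *replacements = _string_list_create (parser);

    _string_list_append_item (replacements, "BAR");
    _string_list_append_item (replacements, "+");
    _string_list_append_item (replacements, "1");

    hash_table_insert (parser->defines, replacements, "FOO");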
This code adds a link dependency on talloc, which does exactly what
we want in terms of memory management for a parser.
The 3 tests added in the previous commit now pass.
The fix is as simple as adding a loop that continues to look up
values in the hash table until one of the following termination
conditions is met:
1. The token we look up has no definition
2. We get back the original symbol we started with
This second termination condition prevents infinite iteration.
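A sketch of the loop, (function and field names are illustrative):

    static const char *
    _resolve (glcpp_parser_t *parser, const char *symbol)
    {
            const char *value = symbol;
            const char *definition;

            while (1) {
                    definition = hash_table_find (parser->defines,
                                                  value);
                    if (definition == NULL)
                            break;  /* 1. No definition. */
                    value = definition;
                    if (strcmp (value, symbol) == 0)
                            break;  /* 2. Original symbol again. */
            }

            return value;
    }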
Most of the current problems were harmless things like missing
declarations, but there was at least one real error, (reversed
argument order for yyerror).
This allows the final program to be 100% "valgrind clean", (freeing
all memory that it allocates). This will make it much easier to ensure
that any allocations that parser actions perform are also cleaned up.