random-code Day 3 - Testing - Building


I got inspired by Zac Hatfield-Dodds's blog post Sufficiently Advanced Testing to pursue the challenge of constructing random programs. For an intro to what I've worked on so far, check out the Day 1 post and the Day 2 post.


Testing is made pretty easy with pytest. In a few lines of code, we can write a little bit of Python, convert it to an AST, make sure we're working on the correct node, and make sure we've discovered the right names, with something like this:

def test_IfExp():
    ast = _strip_expr(str_to_ast("""0 if name else 1"""))
    assert isinstance(ast, IfExp)
    assert ["name"] == nested_unpack(ast)

The only bit of magic is that str_to_ast removes the Module node that Python assumes wraps each string (as if it were its own file). With small examples like this, we can fairly easily cover the AST concept by concept.
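The helpers themselves aren't shown in the post; here's a minimal sketch of what str_to_ast and _strip_expr might look like, inferred from how they're used above (the names come from the tests, but the implementations are assumptions):

```python
import ast


def str_to_ast(source: str) -> ast.AST:
    # ast.parse wraps every string in a Module node, as if the
    # string were its own file; unwrap to the first statement.
    module = ast.parse(source)
    return module.body[0]


def _strip_expr(node: ast.AST) -> ast.AST:
    # Bare expressions are wrapped in an ast.Expr statement;
    # return the inner value (e.g. the IfExp itself).
    return node.value if isinstance(node, ast.Expr) else node


node = _strip_expr(str_to_ast("0 if name else 1"))
```

With those two steps, `node` is the IfExp node itself rather than the Module/Expr wrappers, which is what lets the tests assert directly on the node type.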

There's some more interesting stuff coming, but pytest makes it really easy to do these little tests to build up confidence that things are working as expected.

Testing Win

Testing the BoolOp case quickly surfaced a typo.

In the test we have:

def test_BoolOp():
    ast = str_to_ast("""name or False""")
    assert ["name"] == nested_unpack(ast)

But we get an error:

E       AssertionError: assert ['name'] == [<ast.Name ob...7b7aa5140970>]
E         At index 0 diff: 'name' != <ast.Name object at 0x7b7aa5140970>

The typo came down to returning the wrong thing: when iterating over the values in a BoolOp, we kept yielding the value itself instead of the name id.

# Before: yields the ast.Name node itself
for v in element.values:
    for vid in nested_unpack(v, top_level):
        yield v


# After: yields the name's id string
for v in element.values:
    for vid in nested_unpack(v, top_level):
        yield vid
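For context, here's a minimal sketch of how the fixed branch could sit inside nested_unpack. The function name comes from the post, but the dispatch shown here is an assumption and covers only the node types this example needs:

```python
import ast


def nested_unpack(element, top_level=None):
    # Yield the ids of every Name node reachable from `element`.
    # Sketch only: the real function handles many more node types.
    if isinstance(element, ast.Module):
        for stmt in element.body:
            yield from nested_unpack(stmt, top_level)
    elif isinstance(element, ast.Expr):
        yield from nested_unpack(element.value, top_level)
    elif isinstance(element, ast.Name):
        yield element.id
    elif isinstance(element, ast.BoolOp):
        for v in element.values:
            for vid in nested_unpack(v, top_level):
                yield vid  # the fix: yield the id string, not the Name node


tree = ast.parse("name or False")
```

Run against `"name or False"`, the BoolOp's values are a Name and a Constant; only the Name contributes an id, so the generator produces just `"name"`, which is exactly what the failing assertion expected.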

Ultimately not a very complicated or insidious bug, but definitely the kind that's hard to catch by drive-by inspecting the code while reading a diff.


The other quick win for the day was logging. I'd originally been trying to manage my own printing and debug info by passing around configuration, but moving to Python's built-in logging library has made it much easier.

There's still some stuff I'm trying to figure out (specifically, indenting the log output based on the depth of the iteration in the AST, to make it easier to see which data groups together), but for now I'm pretty satisfied with being able to turn on debug information for an individual test while otherwise keeping the output muted.