The presentation went quite well, and I got a lot of excellent feedback. I was also able to talk to my dad and get some more ideas.
Here is what I am going to do:
First, I will lay out a hypothesis about what good code looks like. I will then choose a batch of code snippets that exemplify my hypothesis, select some snippets that are nothing like it, and include some random snippets as well.
Next, I will collect audience feedback on the snippets and use it to refine my hypothesis. I will use their suggestions to refine the snippets and further polarize them.
After I implement the last group's suggestions, I will show the snippets to a new group and see if there has been any improvement. From this I can begin to derive some knowledge about what people 'think' good code looks like, and what people will actually rate as good code.
This discrepancy alone will be valuable because it will demonstrate whether or not we know how to write code that we will like. As most developers know, their own code is always excellent and clever... for a few months. Come back to it a year later, though, and it might not look so good. This could be because we have gotten much better at writing code, but it could also be because we never really thought it was good in the first place. We merely implemented what we thought were good practices, not what we actually valued in code.
Once I have gathered enough data, and refined my methods such that I can accurately predict if people will perceive a code segment as good or bad, I can then work to create a flurry of good code segments.
With these I can begin offering more talks, still collecting data, but also performing a little Clockwork Orange. I will be able to show slides with code that I know almost everyone agrees is good, and use them to 'condition' the audience.
I can perform little experiments where everyone writes a small function at the start of the talk, and then every 5 slides people pass their functions around and have others rate them. We will see if the average ratings change, and for what reasons. We would then write another function halfway through the talk, perform the same task, and see if the ratings have improved.
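The analysis for that experiment would be simple to sketch. Here is a minimal, hypothetical version with made-up numbers: every data structure, label, and rating below is invented for illustration, assuming peers score each function on a 1-5 scale in several rating rounds.

```python
# Hypothetical sketch of the rating comparison described above.
# Assumption: each function collects a list of 1-5 peer ratings in
# each rating round (every 5 slides); we compare the average rating
# of the function written at the start of the talk with the one
# written halfway through.
from statistics import mean

def average_ratings(ratings_by_round):
    """Map each rating round to its average peer score."""
    return {label: mean(scores) for label, scores in ratings_by_round.items()}

# Made-up example data, purely for illustration.
start_function = {"slide_5": [3, 2, 4], "slide_10": [3, 3, 4]}
halfway_function = {"slide_15": [4, 4, 5], "slide_20": [4, 5, 4]}

before = mean(average_ratings(start_function).values())
after = mean(average_ratings(halfway_function).values())
print(f"start avg: {before:.2f}, halfway avg: {after:.2f}")
```

A rise in the halfway average would be consistent with the 'conditioning' effect, though with real data I would also want to rule out ordinary practice effects.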
Lots of value to be gained here, and I don't even know the half of it.