This is well illustrated by the following story, which I heard while I was a postdoc in Germany. Unfortunately, I am no longer sure who told it to me. Regardless, I believe the story is true, and I know the idea behind it certainly is.
At a scientific conference in the early 1990s, a Russian researcher presented some of his recent results from polymer simulations. His data were graphed in a standard way, so that different slopes in the data curves would indicate different kinds of polymer motion. The scientist explained what his data meant and their implications for polymer theory. Then he noticed a new curve among the others, one he did not recognize. "I'm not familiar with this curve," he said, "but it must be a new run made by my postdoc." He proceeded to explain this curve, too, in a way that was logical, persuasive, and consistent with the other curves.
Then he moved on to the next slide. As he did, everyone saw that the "curve" he had just "explained" was actually a crack on the projector's face plate.
This is funny, yes, but also disturbing. Data are supposed to drive science, and in the long run I still believe they do, but in the short run we are too susceptible to seeing patterns that are not really there. This is analogous to seeing shapes in the clouds, except that the explanations will seem very real -- they will be plausible and consistent with other experimental, simulation, and theoretical results.
This, by the way, is a good reason to doubt exciting new results until they have been independently confirmed a few times.