There's a lot of confusion over the use of the terms hypothesis, theory, and fact in science. We have popular usage, the popular impression of how scientists use the terms, and how the terms actually get used in science. All three share some things in common, but none match. This confusion is no minor matter, because popular ignorance about how the terms are really used in science makes it easier for creationists and other religious apologists to misrepresent science for their own ideological purposes.
Hypothesis vs. Theory
Popularly, hypothesis and theory are used almost interchangeably to refer to vague or fuzzy ideas which seem to have a low probability of being true. In many popular and idealistic descriptions of science, the two are used to refer to the same idea, but in different stages of development. Thus, an idea is just a "hypothesis" when it is new and relatively untested - in other words, when the probability of error and correction is high. However, once it has successfully survived repeated testing, has become more complex, is found to explain a great deal, and has made many interesting predictions, it achieves the status of "theory."
It makes sense to use terminology to differentiate younger from more established ideas in science, but such a differentiation is difficult to make in practice. How much testing is required to move from hypothesis to theory? How much complexity is needed to stop being a hypothesis and start being a theory?
Scientists themselves aren't rigorous in their use of the terms. For example, you can readily find references to the "Steady State Theory" of the universe - it's called a "theory" (even though it has evidence against it and many consider it disproven) because it has logical structure, is logically consistent, is testable, etc.
The only consistent differentiation between hypothesis and theory which scientists actually use is that an idea is a hypothesis when it is being actively tested and investigated, but a theory in other contexts. It is probably because of this that the confusion described above has developed. While an idea is in the process of being tested (and is thus a hypothesis), it is treated very specifically as a tentative explanation. It can, then, be easy to conclude that hypothesis always refers to a tentative explanation, whatever the context.
As far as "facts" are concerned, scientists will caution you that even though they will appear to be using the term in the same way as everyone else, there are background assumptions which are crucial. When most people refer to a "fact," they are talking about something which is definitely, absolutely, and unquestionably true. For scientists, a fact is something which is assumed to be true, at least for the purposes of whatever they are doing at the moment, but which might be refuted at some point.
It is this implicit fallibilism which helps differentiate science from other human endeavors. It is certainly the case that scientists will act as if something is definitely true and not give much thought to the possibility that it is wrong - but that doesn't mean they ignore the possibility completely. This quote from Stephen Jay Gould illustrates the issue nicely:
Moreover, 'fact' doesn't mean 'absolute certainty'; there ain't no such animal in an exciting and complex world. The final proofs of logic and mathematics flow deductively from stated premises and achieve certainty only because they are NOT about the empirical world. ...In science 'fact' can only mean 'confirmed to such a degree that it would be perverse to withhold provisional consent.' I suppose that apples might start to rise tomorrow, but the possibility does not merit equal time in physics classrooms.
The key phrase is "provisional consent" - it is accepted as true provisionally, which means only for the time being. It is accepted as true at this time and for this context because we have every reason to do so and no reason not to do so. If, however, good reasons to reconsider this position arise, then we should begin to withdraw our consent.
Note also that Gould introduces another important point: for many scientists, once a theory has been confirmed and reconfirmed over and over again, we get to the point that it will be treated as a "fact" for pretty much all contexts and purposes. Scientists may refer to Einstein's Special Theory of Relativity, but in most contexts Einstein's ideas here are treated as fact - treated as if they are simply true and accurate descriptions about the world.
Fallibilism in Science
One common feature of facts, theories, and hypotheses in science is that they are all treated as fallible — the likelihood of error might vary greatly, but they are still regarded as something less than absolute truth. This is often regarded as a flaw in science, a reason why science can't provide humans what they need — usually in contrast to religion and faith, which allegedly can somehow provide absolute truth.
This is a mistake: the fallibilism of science is precisely what makes it better than the alternatives. By acknowledging the fallibility of humanity, science always remains open to new information, new discoveries, and new ideas. The problems in religions can generally be traced back to the fact that they rely so much on ideas and opinions established centuries or millennia in the past; the success of science can be traced to the fact that new information forces scientists to revise what they are doing.
Religions don't have hypotheses, theories, or even facts — religions just have dogmas, which are presented as if they were absolute truths regardless of what new information might come along. This is why religion never created new medical treatments, a radio, an airplane, or anything remotely close. Science isn't perfect, but scientists know this, and that's precisely what makes it so useful, so successful, and so much better than the alternatives.