Any predictive system can be broken into four stages: collecting data, cleaning or refining that data, identifying patterns, and finally making predictions. Data collection means gathering any information relevant to the thing being predicted. This could mean indexing web pages on related topics, or picking someone's brain for shared interests and associations. Refining the data means sorting through everything gathered and discarding whatever is unreliable or unwieldy: web pages with too many ads or that load too slowly, or memories too hazy to trust. Then comes identifying patterns, an activity humans are naturally inclined toward, since recognizing regularities lets the brain spend less energy reprocessing familiar situations. Once patterns in the data are found and recorded, one can reason from previous occurrences to hypothesize an event that is likely to happen in the future.

Some events are easier to predict than others, and entire business models are built on this. The stock market, for instance, consists of many entities making educated guesses about how prices will change based on past patterns and current events. Weather forecasters use sophisticated instruments to measure conditions and foretell a storm or a sunny day in advance. How reliable these predictions are depends on how much prior data has been recorded and how clear the patterns in that data turn out to be.
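As a rough illustration of the four stages (not part of the original discussion), here is a minimal Python sketch. The daily temperature readings, the cutoff values, and the use of a straight-line trend as the "pattern" are all illustrative assumptions, not a prescribed method.

```python
# 1. Collect: gather raw observations (some of them unreliable).
# Hypothetical daily temperature readings; None = missing, 80.0 = sensor glitch.
raw_readings = [21.0, 21.5, None, 22.1, 80.0, 22.8, 23.2]

# 2. Clean/refine: drop missing or implausible values (assumed plausible range 0-45).
readings = [r for r in raw_readings if r is not None and 0.0 <= r <= 45.0]

# 3. Identify a pattern: fit a straight line (least squares) through the series.
n = len(readings)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(readings) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# 4. Predict: extrapolate the fitted trend one step into the future.
next_day = n
prediction = intercept + slope * next_day
print(f"Predicted reading for day {next_day}: {prediction:.1f}")
```

With more history and a steadier trend, the extrapolation becomes more trustworthy, which mirrors the point above: the reliability of a prediction tracks the amount of prior data and the clarity of its patterns.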