After reading several articles on agent-based modeling, I am struggling to answer a question that I've been pondering: How do you know that your agent-based model has a complete set of information?
When modeling customer behavior at a supermarket or amusement park, traders' behavior on NASDAQ, driver behavior in traffic, or any other human phenomenon, there are countless variables of human behavior that must be taken into account. When these models form a conclusion about how people act, and about how theoretical changes in the structure of an institution will affect human actions, how do they know that they have not left out a crucial piece of our thought processes, one that may be lost among the myriad of processes that have already been accounted for?
For example, in the supermarket model that I discussed in one of my earlier blog posts, a supermarket owner makes management decisions based upon his judgment of human action as simulated by the model. But how does he know that, once they enter the supermarket, shoppers' behavior won't be influenced by a factor that has not been accounted for? The shopper might be in a hurry, for instance, or might prefer some brands over others. If these "forgotten factors" become prevalent enough, the findings of the model could become useless.
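To make the worry concrete, here is a minimal toy sketch (entirely hypothetical, not the model from the earlier post) of how an omitted factor can skew a simulation's conclusions. Shoppers browse some number of aisles and buy items with a fixed probability; the "hurried shopper" factor, if left out, inflates the predicted average basket size. The parameters and probabilities are invented for illustration only.

```python
import random

def simulate_shoppers(n_shoppers, p_hurried=0.0, seed=0):
    """Toy agent-based supermarket sketch (hypothetical numbers).

    Each shopper visits 10 aisles and buys an item in each aisle with
    probability 0.5 -- unless hurried, in which case they visit only 3.
    """
    rng = random.Random(seed)
    total_items = 0
    for _ in range(n_shoppers):
        hurried = rng.random() < p_hurried
        aisles = 3 if hurried else 10  # hurried shoppers skip most aisles
        total_items += sum(1 for _ in range(aisles) if rng.random() < 0.5)
    return total_items / n_shoppers  # average basket size

# A model that omits the "hurried" factor predicts one average...
baseline = simulate_shoppers(10_000)
# ...but if 40% of real shoppers are hurried, actual behavior diverges.
with_hurry = simulate_shoppers(10_000, p_hurried=0.4)
print(baseline, with_hurry)
```

The gap between the two numbers is the point: any stocking or staffing decision tuned to the first average would be miscalibrated for the second, and the model itself gives no warning that the factor is missing.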
It seems to me that unless a model genuinely strives to replicate human behavior in its entirety, its findings can always be called into question.