In a time when automation and technology thrive, there are significant questions we should ask. Can rationality be measured? Is group rational decision making more rational than individual decision making? Is machine rationality subjective?

Real-world rationality

In the real world, problems are predominantly non-convex, and this makes fully rational decision making essentially unachievable. This is called bounded rationality: there is a trade-off between the amount of information used for decision making and the complexity of the decision model.

Prof Marwala, the author of the Handbook of Machine Learning, deliberated on whether AI machines are more rational than humans. "Machines and humans are both bounded rationally", he explained. "But because of the trade-off between model complexity and model dimension, and the limitation of optimisation routines, intelligent machines are more rational than human beings".

Humans are risk-averse

He went on to explain that rational decision making, in its linguistic description, means making logical choices. A rational agent optimally processes all relevant information to achieve its goal. Rationality has two elements: the use of relevant information, and the efficient processing of that information. "Relevant information is incomplete and imperfect, and the processing engine, which for humans is the brain, is suboptimal. Humans are risk-averse rather than utility maximisers", he said.

Prof Metz is of the opinion that an action is rational just insofar as it maximises the good and minimises the bad. "It would be irrational to programme an automated system, or for AI to decide, only in ways that maximise outcomes", he said. "Human rights, among other considerations, are also rational to observe".
Prof Tshilidzi Marwala and Prof Thaddeus Metz