On One Foot
SHOULD YOU POSE YOUR QUESTIONS ABOUT JUDAISM TO AI?
by AVI FINEGOLD

LARGE LANGUAGE MODELS (LLMs)—the algorithms that underlie generative AI chatbots like ChatGPT and Gemini—are transforming daily life. Some people already use them as a matter of routine instead of traditional internet searches; many students use them (even when they aren't supposed to) to complete assignments; many companies have dabbled in replacing work traditionally done by humans—especially writing and design—with material produced by them. And a smaller but growing number of people turn to these chatbots for more intimate needs, spanning everything from friendship to therapy. This extends to spiritual and religious life as well. Is it okay to ask a chatbot a halakhic question, and rely on its reasoning for an answer? Are LLMs accurate sources of Jewish knowledge? And what does it mean to ask a machine for Jewish advice rather than turning to books or rabbis? Here, we consider a question of contemporary relevance and explore how sources both classical and modern address it.

Just like Hillel's student, we all have complex questions that we want answered as simply as possible.
BABYLONIAN TALMUD, AVODAH ZARAH 7A
The Sages taught: In the case of one who asks a question of a Sage with regard to an issue of ritual impurity and the Sage rules that the item is impure, he may not ask the same question of another Sage and have him rule that it is pure. Similarly, in the case of one who asks a Sage a halakhic question and he deems it forbidden, he may not ask the question of another Sage and have him deem it permitted. In a situation where there were two Sages sitting together and one deems an item impure and the other one deems it pure, or if one deems it prohibited and the other one deems it permitted, the questioner should proceed as follows: if one of the Sages was superior to the other in wisdom and [his view was shared by a majority], one should follow his ruling, and if not, he should follow the one who rules stringently.

1 THE TALMUD WAS WELL AWARE that, by the time of its redaction in the sixth century or so, there were many competing halakhic opinions circulating in society at large, as well as in the study halls where debates were taking place. A system was required for assessing what one should do in the face of this plethora of perspectives. And so Talmudic rabbis arrived at one major litmus test: the authority of the person issuing an opinion is crucial. Whom you ask matters. On this view, someone becomes authoritative on the strength of their lineage—whom they learned from—as well as demonstrated facility and virtuosity in their thinking. All of this has a great impact on how we rely on AI. From where does a chatbot derive its authority? Has it been trained on authoritative texts? (We don't really know; all chatbots are proprietary, and the companies that create them do not generally release information about the data they have been trained on.) Has it demonstrated repeated moments of brilliant legal analysis, or at least a consistent pattern of citing sources accurately, and without hallucination?