Fascinating. I would make one distinction, which is that the need for a pesak from a human rav is not just to maintain the personal relationship, but to address any personal aspect that might be in play. Although Rabbi Breitowitz doesn't mention it, a rav or beis din can take individual considerations (family history, finances, community needs, etc.) into account, and thus render a different pesak to the same question posed by different people. There's no mystery or contradiction here -- it's in keeping with the Torah's goal of "justice" (and also peace). It's about overall, ultimate fairness, not precision.

Members of the Sanhedrin had to be married to ensure the requisite compassion when judging capital cases. And why should this be? So we can be more like God: "Just as He is compassionate and merciful, so you should be compassionate and merciful."

A robot might be able to tell you what the Shulchan Aruch says about a drop of milk in a pot of meat. But it can't declare an agunah free to marry. And if it starts answering the former, it won't be long till it starts fielding questions on the latter. And then we're all toast.
You write:
>" a rav or beis din can take individual considerations (family history, finances, community needs, etc.) into account, and thus render a different pesak to the same question posed by different people. "
I deal with this in the article. I wrote:
>"Regarding the aspect of personalization: One can provide ChatGPT with the same information that one gives a rabbi, and ChatGPT can already, or will soon be able to, precisely emulate how an average rabbi would respond. The natural pushback on this would be that a rabbi knows you, given an existing relationship. The response to this would be two-fold: First of all, many top poskim give responses to people that they have no existing relationship with. And second of all, one can have an existing dialogue with an AI, allowing the AI to give personalized responses based on the history, in the same way that a shul rabbi would."
Apologies, I missed that. But I don't think it's complete. It's not only about the history; it can also be about other extrinsic factors (community standards, finances, etc.). In addition, a consistent reason given by the Torah for the importance of personalization is compassion. No amount of historical data can provide a robot with that.
> "It's not only about the history, it can also be about other extrinsic factors (community standards, finances, etc.)."
An AI can know all that as well, based on public data. I'll grant you that a rabbi will be privy to info and standards that aren't publicly known.
>" In addition, a consistent reason given by the Torah for the importance of personalization is compassion. No amount of historical data can provide a robot with that."
Compassion is a meta-halachic consideration that has been extensively formalized in recent research. See these popular articles, including one from the Cardozo Academy:
"Truth, Compromise, and Meta-Halakhah" (https://www.cardozoacademy.org/reflections/truth-compromise-and-meta-halakhah/)
"Halakhic Response to Meta-Halakhic Values" (https://www.jewishideas.org/article/halakhic-response-meta-halakhic-values)
And these academic sources:
"Eliezer Goldman and the Origins of Meta-Halacha"(https://academic.oup.com/mj/article-abstract/34/3/309/975916)
"Halakhah, Meta-Halakhah and Philosophy" (https://www.magnespress.co.il/en/book/Halakhah,_Meta-Halakhah_and_Philosophy-5463)
This is part of a broader question of human vs. algorithmic judgment in courts. See Kahneman's "Noise", which shows that human judgments contain a shockingly high level of "noise", meaning that they are essentially arbitrary: grounded not in the situation of the person being judged, but in the judge himself (https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/dp/0316451401).
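To see what "noise" means operationally, the noise audit Kahneman describes is simple enough to sketch: give many judges the identical case file and measure the spread of their judgments. A toy version, with all numbers invented for illustration:

```python
# Toy "noise audit" in the spirit of Kahneman's Noise: one identical
# case, many judges; the spread across the judges is the noise.
# The sentences below are invented figures, purely for illustration.
from statistics import mean, stdev

# Hypothetical prison sentences (in months) handed down for the
# *same* case file by different judges:
sentences = [18, 36, 12, 60, 24, 30, 9, 48]

print(f"mean sentence: {mean(sentences):.1f} months")
print(f"noise (std dev across judges): {stdev(sentences):.1f} months")
# A large spread for a single identical case means the outcome tracks
# the judge, not the situation of the person being judged.
```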
Specifically regarding compassion: while the emotion can't be automated, the practical outcome of compassion can be formulated as an algorithm. To explain further:
Wikipedia says: "Compassion involves allowing ourselves to be moved by suffering, and experiencing the motivation to help alleviate and prevent it." (https://en.wikipedia.org/wiki/Compassion)
So the algorithm would put weight on preventing human suffering -- see the sketch at the end of this comment.
As for suffering and its opposite, pleasure or happiness: these are ideas that philosophers have discussed for thousands of years. For recent work on formalizing them, see for example Sam Harris's book The Moral Landscape:
https://www.amazon.com/Moral-Landscape-Science-Determine-Values/dp/143917122X
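Putting the pieces together, here is a minimal sketch of what "put weight on preventing human suffering" could look like as a decision rule. The scores, the weight, and the field names are all invented placeholders, not a serious proposal for formalizing pesak:

```python
# Minimal sketch: score candidate rulings on formal fit with the
# sources, then penalize expected human suffering -- the "practical
# outcome of compassion" as a weight. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Ruling:
    name: str
    halachic_fit: float        # 0..1, fit with the formal sources
    expected_suffering: float  # 0..1, predicted hardship if applied

SUFFERING_WEIGHT = 0.5  # how strongly compassion bends the outcome

def score(r: Ruling) -> float:
    # Higher is better: sound in the sources, light in human cost.
    return r.halachic_fit - SUFFERING_WEIGHT * r.expected_suffering

options = [
    Ruling("strict reading", halachic_fit=0.95, expected_suffering=0.8),
    Ruling("lenient reading", halachic_fit=0.85, expected_suffering=0.1),
]

best = max(options, key=score)
print(best.name)  # -> "lenient reading" under these invented numbers
```

Under these invented numbers the formally stronger ruling loses to the one that spares more hardship, which is exactly the trade-off the weight is meant to capture.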