For example, the models treat it as unethical, or screen it outright, to answer a question about crime rates and demographics (race / gender).
The answers you get are things like "It's essential to examine the broader context and address the underlying factors contributing to criminal activity." or that crime "is influenced by various factors such as socioeconomic status, education, and access to resources and opportunities." or "It is more useful to focus on addressing social and economic equality for all communities."
Can you actually get things like per capita rates of reported murders by gender/race out of the models, or is there some setting / prompt you have to use for these questions?
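For concreteness, here's roughly the kind of query I mean (a minimal sketch against the OpenAI chat completions API; the model name and prompt wording are just illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A system prompt asking for plain statistics -- this is the kind of
    # "setting" I'm asking about; in my experience it still gets deflected.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a statistician. Answer with published "
                        "figures and cite the source."},
            {"role": "user",
             "content": "What are the per capita rates of reported murders "
                        "broken down by gender?"},
        ],
        temperature=0,  # keep the answer deterministic for a factual query
    )
    print(response.choices[0].message.content)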
I'm wondering whether Bard maybe didn't have this filtered as thoroughly?