The Racing Rules of Sailing
In thinking this through, I have a few questions.
- How is coaching by AI different from coaching by a competitor's own coach, their teammates, a friend, Dave Perry, etc.?
- How does coaching "skew" testimony? My belief is that coaching generally makes for better and more efficient hearings. If competitors know the elements of a claim, they are better able to craft their arguments to achieve their goals. How is that bad?
- If coaching increases the likelihood of false testimony (and I know of no metric that supports this), why would AI coaching be any different from coaching from any other source?
- If the application offers AI coaching to both parties, and preserves that coaching for the parties and judges to see, why would this not lead to more concise hearings and better understanding?
- The PC is still tasked with deciding how much weight to give the evidence. If a competitor provides information that the members of the panel find less convincing than other evidence, they should weight it accordingly. Thus, even if a competitor uses AI to craft testimony, if that testimony is true, what is the problem? If it is not true, it is treated like any other evidence and weighted by the panel.
- If the application provides coaching and provides the panel with access to see that coaching, isn't that better than not knowing whether a competitor has been coached and what they were told?