Unique negative dimension

Unique negative dimension (UND) is a complexity measure for the model of learning from positive examples. The unique negative dimension of a class C of concepts is the size of the largest subclass D ⊆ C such that for every concept c ∈ D, the set ⋂(D ∖ {c}) ∖ c is nonempty. Equivalently, every concept c in D has a "unique negative" example: a point that belongs to every other concept of D but not to c.
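
For small finite classes the definition can be checked directly. Below is a minimal brute-force sketch (not from the cited sources) that computes the UND of a concept class given as finite sets of domain points; the function names are illustrative.

    from itertools import combinations

    def has_unique_negatives(D):
        # Defining property: every concept c in D has a "unique negative" --
        # a point contained in every other concept of D but not in c.
        return all(
            frozenset.intersection(*(d for d in D if d != c)) - c
            for c in D
        )

    def unique_negative_dimension(concepts):
        # Brute-force UND of a finite class; concepts are given as sets of points.
        concepts = list({frozenset(c) for c in concepts})
        # Try subclass sizes from largest to smallest; return the first size
        # whose witnessing subclass satisfies the unique-negative property.
        for k in range(len(concepts), 1, -1):
            if any(has_unique_negatives(D) for D in combinations(concepts, k)):
                return k
        # The size-1 case depends on the convention chosen for an empty intersection.
        return 1 if concepts else 0

    # Example: over the domain {1, 2, 3}, the class {{1,2}, {1,3}, {2,3}} has UND 3,
    # since each concept's missing point lies in both other concepts but not in it.
    print(unique_negative_dimension([{1, 2}, {1, 3}, {2, 3}]))  # -> 3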

This concept was originally proposed by M. Geréb-Graus in "Complexity of learning from one-side examples", Technical Report TR-20-89, Harvard University Division of Engineering and Applied Science, 1989.

References

  1. Darnstädt, Malte; Simon, Hans Ulrich; Szörényi, Balázs (30 January 2014). "Supervised learning and Co-training". Theoretical Computer Science. 519: 68–87. doi:10.1016/j.tcs.2013.09.020.
  2. Geréb-Graus, Mihály (1989). Lower bounds on parallel, distributed and automata computations (Thesis). OCLC 1243704701. OSTI 5815133. TR-20-89.
  3. Ehrenfeucht, Andrzej; Haussler, David; Kearns, Michael; Valiant, Leslie (1 September 1989). "A general lower bound on the number of examples needed for learning". Information and Computation. 82 (3): 247–261. doi:10.1016/0890-5401(89)90002-3. S2CID 1925579.

