Feature hashing


In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their hash values as indices directly, rather than looking the indices up in an associative array.

Motivating example

In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag of words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets.

Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document, and each column is a single feature/word; the entry i, j in such a matrix captures the frequency (or weight) of the jth term of the vocabulary in document i. (An alternative convention swaps the rows and columns of the term-document matrix, but this difference is immaterial.) Typically, these vectors are extremely sparse.

The common approach is to construct, at learning time or prior to that, a dictionary representation of the vocabulary of the training set, and use that to map words to indices. Hash tables and tries are common candidates for dictionary implementation. E.g., the texts

John likes to watch movies. Mary likes movies too.
John also likes to watch football games.
An apple a day keeps the doctor away.

can be converted, using the dictionary (in Python notation):

{"John": 1, "likes": 2, "to": 3, "watch": 4, "movies": 5, "also": 6, "football": 7, "games": 8, "Mary": 9, "too": 10,
 "an": 11, "apple": 12, "a": 13, "day": 14, "keeps": 15, "the": 16, "doctor": 17, "away": 18}

to the matrix

{\displaystyle {\begin{bmatrix}1&2&1&1&2&0&0&0&1&1&0&0&0&0&0&0&0&0\\1&1&1&1&0&1&1&1&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&1&1&1&1&1&1&1&1\end{bmatrix}}}

(Punctuation was removed, as is usual in document classification and clustering.)
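
As an illustration, a minimal dictionary-based vectorizer can be sketched in Python as follows (the helper names tokenize, build_dictionary and vectorize are hypothetical, and tokens are lowercased with 0-based indices, so the column ordering may differ slightly from the dictionary shown above):

 def tokenize(text):
     """Strip punctuation and split on whitespace."""
     return text.replace(".", "").lower().split()

 def build_dictionary(texts):
     """Assign a column index to each distinct token, in order of first appearance."""
     dictionary = {}
     for text in texts:
         for token in tokenize(text):
             if token not in dictionary:
                 dictionary[token] = len(dictionary)
     return dictionary

 def vectorize(text, dictionary):
     """Count how often each dictionary term occurs in the text."""
     x = [0] * len(dictionary)
     for token in tokenize(text):
         x[dictionary[token]] += 1
     return x

 texts = ["John likes to watch movies. Mary likes movies too.",
          "John also likes to watch football games.",
          "An apple a day keeps the doctor away."]
 dictionary = build_dictionary(texts)
 matrix = [vectorize(t, dictionary) for t in texts]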

The problem with this process is that such dictionaries take up a large amount of storage space, and grow in size as the training set grows. This can be seen in the above example: the third "document" has no terms in common with the other two, and therefore extends the vocabulary with its eight distinct words. Moreover, when the vocabulary is kept fixed, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine learned filter; this is why feature hashing has been tried for spam filtering at Yahoo! Research.

Note that the hashing trick is not limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features.

Feature vectorization using the hashing trick

Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function h to the features (e.g., words) in the items under consideration, then using the hash values directly as feature indices and updating the resulting vector at those indices:

 function hashing_vectorizer(features : array of string, N : integer):
     x := new vector[N]                // N-dimensional vector of zeros
     for f in features:
         h := hash(f)
         x[h mod N] += 1               // the hash value, reduced modulo N, is the index
     return x
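
The same procedure can be sketched in plain Python (an illustrative sketch only; Python's built-in hash is used for brevity, whereas practical implementations typically rely on a dedicated hash function such as MurmurHash):

 def hashing_vectorizer(features, N):
     """Hash each feature to one of N buckets and count occurrences there."""
     x = [0] * N
     for f in features:
         h = hash(f)      # note: Python salts string hashes per process (PYTHONHASHSEED)
         x[h % N] += 1    # colliding features simply add their counts together
     return x

 # No dictionary is needed; previously unseen words still map to some bucket.
 print(hashing_vectorizer("John likes to watch movies Mary likes movies too".split(), 10))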

It has been suggested that a second, single-bit output hash function ξ be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes

 function hashing_vectorizer(features : array of string, N : integer):
     x := new vector[N]
     for f in features:
         h := hash(f)
         idx := h mod N
         if ξ(f) == 1:
             x[idx] += 1               // ξ decides whether to add or subtract at the index
         else:
             x[idx] -= 1
     return x

The above pseudocode actually converts each sample into a vector. An optimized version would instead only generate a stream of (h,ξ) pairs and let the learning and prediction algorithms consume such streams; a linear model can then be implemented as a single hash table representing the coefficient vector.
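
One possible sketch of that optimization in Python (the names hashed_stream and HashedLinearModel are illustrative, the perceptron-style update is just one choice of learner, and the sign hash ξ is simulated by hashing a tagged copy of the feature):

 def hashed_stream(features, N):
     """Yield (index, sign) pairs instead of materializing a length-N vector."""
     for f in features:
         idx = hash(f) % N
         sign = 1 if hash("sign:" + f) % 2 == 0 else -1   # stand-in for the sign hash ξ
         yield idx, sign

 class HashedLinearModel:
     """A linear model whose coefficient vector is a hash table of bucket -> weight."""
     def __init__(self):
         self.w = {}

     def score(self, pairs):
         return sum(sign * self.w.get(idx, 0.0) for idx, sign in pairs)

     def update(self, pairs, error, lr=0.1):
         for idx, sign in pairs:                   # only buckets that actually occur are stored
             self.w[idx] = self.w.get(idx, 0.0) + lr * error * sign

 model = HashedLinearModel()
 pairs = list(hashed_stream("John likes to watch movies".split(), 2 ** 20))
 model.update(pairs, error=1.0)                    # e.g. a positive training signal
 print(model.score(pairs))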

Properties

When a second hash function ξ is used to determine the sign of a feature's value, the expected value of each entry in the output vector becomes zero, because ξ causes some collisions to cancel out. E.g., suppose an input contains two symbolic features f₁ and f₂ that collide with each other, but not with any other features in the same input; then there are four possibilities which, if we make no assumptions about ξ, occur with equal probability:

 ξ(f₁)   ξ(f₂)   Final value, ξ(f₁) + ξ(f₂)
 -1      -1      -2
 -1       1       0
  1      -1       0
  1       1       2

In this example, there is a 50% probability that the hash collision cancels out. Multiple hash functions can be used to further reduce the risk of collisions.
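
A quick empirical check of this cancellation can be sketched in Python (random independent signs stand in for ξ(f₁) and ξ(f₂)):

 import random

 trials = 100_000
 total = cancelled = 0
 for _ in range(trials):
     s1, s2 = random.choice([-1, 1]), random.choice([-1, 1])
     value = s1 + s2           # combined contribution of the two colliding features
     total += value
     cancelled += (value == 0)

 print(total / trials)         # close to 0: the expected value of the collision
 print(cancelled / trials)     # close to 0.5: the probability the collision cancels out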

Furthermore, if φ is the transformation implemented by a hashing trick with a sign hash ξ (i.e. φ(x) is the feature vector produced for a sample x), then inner products in the hashed space are unbiased:

{\displaystyle \mathbb {E} [\langle \varphi (x),\varphi (x')\rangle ]=\langle x,x'\rangle }

where the expectation is taken over the hashing function φ. It can be verified that ⟨φ(x), φ(x')⟩ is a positive semi-definite kernel.
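
This unbiasedness can be illustrated with a small Monte Carlo sketch in Python (the index hash and sign hash are drawn at random for each trial, and signed_hash_vector is a hypothetical helper playing the role of φ):

 import random

 def signed_hash_vector(x, h, xi, N):
     """phi(x): fold a length-d vector x into N buckets using index hash h and sign hash xi."""
     phi = [0.0] * N
     for j, v in enumerate(x):
         phi[h[j]] += xi[j] * v
     return phi

 d, N, trials = 50, 10, 20_000
 x = [random.gauss(0, 1) for _ in range(d)]
 y = [random.gauss(0, 1) for _ in range(d)]
 true_ip = sum(a * b for a, b in zip(x, y))

 est = 0.0
 for _ in range(trials):
     h = [random.randrange(N) for _ in range(d)]       # random index hash
     xi = [random.choice([-1, 1]) for _ in range(d)]   # random sign hash ξ
     px = signed_hash_vector(x, h, xi, N)
     py = signed_hash_vector(y, h, xi, N)
     est += sum(a * b for a, b in zip(px, py))

 print(est / trials, "vs", true_ip)   # the average hashed inner product is close to the true one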

Extensions and variations

Recent work extends the hashing trick to supervised mappings from words to indices, which are explicitly learned to avoid collisions of important terms.

Applications and practical performance

Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function. Weinberger et al. applied their variant of hashing to the problem of spam filtering, formulating it as a multi-task learning problem in which the input features are (user, feature) pairs, so that a single parameter vector captures per-user spam filters as well as a global filter for several hundred thousand users; they found that this improved filtering accuracy.

Implementations

Implementations of the hashing trick are present in:

See also

References

  1. Kilian Weinberger; Anirban Dasgupta; John Langford; Alex Smola; Josh Attenberg (2009). Feature Hashing for Large Scale Multitask Learning (PDF). Proc. ICML.
  2. K. Ganchev; M. Dredze (2008). Small statistical models by random feature mixing (PDF). Proc. ACL08 HLT Workshop on Mobile Language Processing.
  3. Josh Attenberg; Kilian Weinberger; Alex Smola; Anirban Dasgupta; Martin Zinkevich (2009). "Collaborative spam filtering with the hashing trick". Virus Bulletin.
  4. Owen, Sean; Anil, Robin; Dunning, Ted; Friedman, Ellen (2012). Mahout in Action. Manning. pp. 261–265.
  5. Shi, Q.; et al. (2009). Hash Kernels. AISTATS.
  6. Bai, B.; et al. (2009). Supervised semantic indexing (PDF). CIKM. pp. 187–196.

External links
