Most state-of-the-art approaches to action recognition rely on global representations, obtained either by concatenating local information into a long descriptor vector or by computing a single location-independent histogram. This limits their performance in the presence of occlusions and when multiple viewpoints must be handled. We propose a novel approach that provides robustness to both occlusions and viewpoint changes and yields significant improvements over existing techniques. At its heart is a local partitioning and hierarchical classification of the 3D Histogram of Oriented Gradients (HOG) descriptor, which represents sequences of images that have been concatenated into a data volume. We achieve robustness to occlusions and viewpoint changes by combining training data from all viewpoints to train classifiers that estimate action labels independently over sets of HOG blocks. A top-level classifier then combines these local labels into a global action class decision.
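The hierarchical scheme described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-block 3D HOG features are already extracted into an array of shape `(n_samples, n_blocks, d)`, uses a nearest-centroid rule as a stand-in for each per-block classifier, and a majority vote as a stand-in for the top-level classifier; all function names and shapes are hypothetical.

```python
import numpy as np

def train_block_classifiers(features, labels, n_blocks):
    # features: (n_samples, n_blocks, d) array of per-block 3D HOG
    # descriptors (hypothetical layout); labels: (n_samples,) action ids.
    # A nearest-centroid model stands in for each local block classifier.
    classes = np.unique(labels)
    centroids = np.stack([
        np.stack([features[labels == c, b].mean(axis=0) for c in classes])
        for b in range(n_blocks)
    ])  # shape (n_blocks, n_classes, d)
    return classes, centroids

def predict_action(features, classes, centroids):
    # features: (n_blocks, d) per-block descriptors of one test sequence.
    # Step 1: each block classifier emits a local action label.
    dists = np.linalg.norm(centroids - features[:, None, :], axis=2)
    local_labels = classes[dists.argmin(axis=1)]  # one label per block
    # Step 2: a top-level combiner (here, a simple majority vote)
    # fuses the local labels into a global action decision.
    vals, counts = np.unique(local_labels, return_counts=True)
    return vals[counts.argmax()]
```

Because each block votes independently, occluding a subset of blocks only removes some local votes rather than corrupting one monolithic global descriptor, which is the intuition behind the robustness claim.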