The human brain consists of many billions of nerve cells, the neurons, each making several thousand connections, the synapses. These connections are not fixed but change continuously. To describe synaptic plasticity, different mathematical rules have been proposed, most of which follow Hebb's postulate. Donald Hebb suggested in 1949 that a synapse changes when pre-synaptic activity, i.e. the activity arriving at a synapse converging onto the neuron, and post-synaptic activity, i.e. the activity of the neuron itself, are correlated. A general descriptive framework, however, is still missing. With the results developed here, it is now possible to relate different Hebbian rules and their properties to one another. In addition, a setup is presented with which any Hebbian plasticity rule possessing a certain property can be used to emulate temporal difference learning, a widely used reinforcement learning algorithm. Furthermore, it is also possible to calculate plasticity analytically for many synapses with continuously changing activity. This is relevant for all behaving systems (machines, animals) whose interaction with their environment leads to widely varying neural activation.
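The two learning rules named above can be illustrated in a minimal sketch. The function names, the learning rate `eta`, and the discount factor `gamma` below are illustrative assumptions for a textbook-style Hebbian update and a textbook-style temporal difference (TD) step; they are not the specific rules or the emulation setup derived in this work.

```python
def hebbian_update(w, pre, post, eta=0.01):
    """Plain Hebbian rule: the weight grows when pre- and
    post-synaptic activity occur together (correlate)."""
    return w + eta * pre * post

def td_error(V, s, s_next, r, gamma=0.9):
    """Temporal difference error: delta = r + gamma*V(s') - V(s)."""
    return r + gamma * V[s_next] - V[s]

# Toy demo: a synapse strengthens when pre and post fire together.
w = 0.5
w = hebbian_update(w, pre=1.0, post=1.0)   # w increases to ~0.51

# Toy TD step on a three-state value table.
V = [0.0, 0.0, 0.0]
delta = td_error(V, s=0, s_next=1, r=1.0)  # reward drives the error
V[0] += 0.1 * delta                        # value estimate updated
```

In the plain Hebbian rule the weight change depends only on the local correlation of the two activities, while the TD step additionally uses a reward signal; the work summarized here concerns the conditions under which rules of the first kind can reproduce the behavior of the second.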