This research deals with human communicative feedback behaviour, analysed across languages (Italian and Swedish), modalities (auditory versus visual) and communicative situations (human-human versus human-machine dialogues). The aim is to provide more insight into how humans express feedback and, at the same time, to suggest a method for collecting valuable data that can be used to control the facial and head movements related to visual feedback in synthetic conversational agents. The analysed data range from spontaneous conversations video-recorded in real communicative situations and semi-spontaneous dialogues obtained with different eliciting techniques, to a corpus of controlled interactive speech collected by means of a motion-capture system. A dedicated coding scheme has been developed, tested and used to annotate feedback with the support of several available software packages for audio-visual analysis. This study should be especially useful to professionals in Speech Communication and Speech Technology, but psychologists studying human behaviour in human-human and human-machine interactions may also find it of interest.