{"id":2623,"date":"2022-10-25T15:46:20","date_gmt":"2022-10-25T15:46:20","guid":{"rendered":"https:\/\/www.arg.tech\/?page_id=2623"},"modified":"2022-10-26T12:21:27","modified_gmt":"2022-10-26T12:21:27","slug":"dialogical-fingerprinting","status":"publish","type":"page","link":"https:\/\/www.arg.tech\/index.php\/dialogical-fingerprinting\/","title":{"rendered":"Dialogical Fingerprinting"},"content":{"rendered":"\n<p><strong>Duration: <\/strong>March &#8211; October 2019<br><strong>People:<\/strong> Chris Reed (PI), Jacky Visser (PDRA), Matt Foulis (research intern)<br><strong>Funder: <\/strong><a rel=\"noreferrer noopener\" href=\"https:\/\/www.gov.uk\/government\/organisations\/defence-science-and-technology-laboratory\" target=\"_blank\">Dstl<\/a> \/ <a rel=\"noreferrer noopener\" href=\"https:\/\/www.gov.uk\/government\/organisations\/defence-and-security-accelerator\" target=\"_blank\">DASA<\/a><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Project Description<\/h3>\n\n\n\n<p>How people engage with dialogue is as unique to them as their fingerprint. We show that this idea can be operationalised using state-of-the-art deep learning models. Who is speaking can be determined by how they interact; their role and attitude, by the language they use.<\/p>\n\n\n\n<p>Our demonstrator system for the algorithms underpinning the Dstl-DASA Behavioural Analytics project on Dialogical Fingerprinting provides an intuitive interface to a range of data sets, to classical and deep learning AI algorithms, and to output characteristics, including speaker identity, dialogical role, emotional status and political alignment.<\/p>\n\n\n\n<p>Select a machine learning algorithm, select the features to use, select the training data, and select what properties to look for. Then lock and learn. Deep learning algorithms construct the model, which is then applied to test data: an episode of BBC Radio 4\u2019s Moral Maze. 
As playback continues, the model makes increasingly confident predictions about who\u2019s who.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"767\" height=\"1024\" src=\"https:\/\/arg-tech.org\/wp-content\/uploads\/2022\/10\/thumbnail_IMG_2659-767x1024.jpg\" alt=\"\" class=\"wp-image-2633\" srcset=\"https:\/\/www.arg.tech\/wp-content\/uploads\/2022\/10\/thumbnail_IMG_2659-767x1024.jpg 767w, https:\/\/www.arg.tech\/wp-content\/uploads\/2022\/10\/thumbnail_IMG_2659-225x300.jpg 225w, https:\/\/www.arg.tech\/wp-content\/uploads\/2022\/10\/thumbnail_IMG_2659-768x1025.jpg 768w, https:\/\/www.arg.tech\/wp-content\/uploads\/2022\/10\/thumbnail_IMG_2659-112x150.jpg 112w, https:\/\/www.arg.tech\/wp-content\/uploads\/2022\/10\/thumbnail_IMG_2659-49x65.jpg 49w, https:\/\/www.arg.tech\/wp-content\/uploads\/2022\/10\/thumbnail_IMG_2659.jpg 959w\" sizes=\"auto, (max-width: 767px) 100vw, 767px\" \/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Key publications<\/h3>\n\n\n\n<p style=\"font-size:12px\">Foulis, M., Visser, J., &amp; Reed, C. (2020a). <a rel=\"noreferrer noopener\" href=\"https:\/\/discovery.dundee.ac.uk\/ws\/portalfiles\/portal\/52034786\/FAIA_326_FAIA200536.pdf\" target=\"_blank\">Dialogical fingerprinting of debaters<\/a>. In H. Prakken, S. Bistarelli, F. Santini &amp; C. Taticchi (Eds.), Proceedings of COMMA 2020, 8-11 September 2020 (pp. 465-466). Amsterdam: IOS Press. DOI: 10.3233\/FAIA200536<\/p>\n\n\n\n<p style=\"font-size:12px\">Foulis, M., Visser, J., &amp; Reed, C. (2020b). <a rel=\"noreferrer noopener\" href=\"https:\/\/argvis-workshop.lingvis.io\/pdfs\/ArgVis2020_paper_3.pdf\" target=\"_blank\">Interactive visualisation of debater identification and characteristics<\/a>. In F. Sperrle, M. El-Assady, B. Pl\u00fcss, R. Duthie &amp; A. Hautli-Janisz (Eds.), Proceedings of the COMMA workshop on Argument Visualisation, COMMA, 8 September 2020 (pp. 
1-7)<\/p>\n\n\n\n<figure class=\"wp-block-embed-youtube aligncenter wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Dialogical Fingerprinting of Debaters\" width=\"625\" height=\"352\" src=\"https:\/\/www.youtube.com\/embed\/wlPaJ1PxE5c?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<figure class=\"wp-block-embed-youtube aligncenter wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Dialogical Fingerprinting: Recognising the unique way you converse\" width=\"625\" height=\"352\" src=\"https:\/\/www.youtube.com\/embed\/O2APn1VeJeY?start=176&#038;feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Duration: March &#8211; October 2019People: Chris Reed (PI), Jacky Visser (PDRA), Matt Foulis (research intern)Funder: Dstl \/ DASA Project Description How people engage with dialogue is as unique to them as their fingerprint. We show that this idea can be operationalised using state-of-the-art deep learning models. 
Who is speaking can be determined by how they [&hellip;]<\/p>\n","protected":false},"author":12,"featured_media":2634,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-2623","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/pages\/2623","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/users\/12"}],"replies":[{"embeddable":true,"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/comments?post=2623"}],"version-history":[{"count":9,"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/pages\/2623\/revisions"}],"predecessor-version":[{"id":2635,"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/pages\/2623\/revisions\/2635"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/media\/2634"}],"wp:attachment":[{"href":"https:\/\/www.arg.tech\/index.php\/wp-json\/wp\/v2\/media?parent=2623"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}