BEGIN:VCALENDAR
VERSION:2.0
PRODID:IEEE vTools.Events//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:Turkey
BEGIN:DAYLIGHT
DTSTART:20380119T061407
TZOFFSETFROM:+0300
TZOFFSETTO:+0300
RRULE:FREQ=YEARLY;BYDAY=3TU;BYMONTH=1
TZNAME:+03
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20160907T000000
TZOFFSETFROM:+0300
TZOFFSETTO:+0300
RRULE:FREQ=YEARLY;BYDAY=1WE;BYMONTH=9
TZNAME:+03
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20200222T131838Z
UID:C3FCB5EB-CF2D-4B4C-A727-C42B6360A951
DTSTART;TZID=Turkey:20200221T133000
DTEND;TZID=Turkey:20200221T153000
DESCRIPTION:21 February 2020 (13:40): IEEE AP/MTT/EMC/ED Turkey Seminar
  Series (S.65)\n\nSpeaker: Prof. Pınar Duygulu Şahin\, Hacettepe
  University\n\nTopic: "Recognizing and Transferring the Styles of
  Artists Who Illustrate Children’s Books"\n\nLocation: Middle East
  Technical University\, Ankara\, Turkey\n\nAbstract: In this talk\, I
  will present our recent work exploring illustrations in children’s
  books as a new domain for artist classification and unpaired
  image-to-image translation. Our work is motivated by a young boy’s
  ability to recognize an illustrator’s style in a totally different
  context. The boy’s enthusiasm led us to explore the capabilities of
  machines to recognize the style of illustrators.\n\nFirst\, we
  collected pages from children’s books to construct a new
  illustrations dataset consisting of about 9500 pages from 24 artists.
  We used deep networks to categorize illustrators\, and with around 94%
  classification performance our method outperformed traditional methods
  by more than 10%.\n\nGoing beyond categorization\, we explored style
  transfer. We show that although the current state-of-the-art
  image-to-image translation models successfully transfer either the
  style or the content\, they fail to transfer both at the same time. We
  propose a new generator network to address this issue and show that
  the resulting network strikes a better balance between style and
  content.\n\nThere are no well-defined or agreed-upon evaluation
  metrics for unpaired image-to-image translation. So far\, the success
  of image translation models has been judged by subjective\,
  qualitative visual comparison on a limited number of images. To
  address this problem\, we propose a new framework for the quantitative
  evaluation of image-to-illustration models\, in which both content and
  style are taken into account using separate classifiers. Under this
  new evaluation framework\, our proposed model performs better than the
  current state-of-the-art models on the illustrations dataset.\n\nBio:
  Pınar Duygulu received her BSc\, MSc\, and PhD degrees from the
  Department of Computer Engineering at Middle East Technical
  University\, Ankara\, Turkey in 1996\, 1998\, and 2003\, respectively.
  During her PhD\, she was a visiting scholar at the University of
  California\, Berkeley under the supervision of Prof. David Forsyth.
  After a post-doctoral position with the Informedia Project at Carnegie
  Mellon University\, she joined the Department of Computer Engineering
  at Bilkent University\, Ankara\, Turkey in 2004. In 2014 and 2015 she
  was a research associate at Carnegie Mellon University. Currently\,
  she is a faculty member in the Department of Computer Engineering at
  Hacettepe University\, Ankara\, Turkey. She received the Science
  Academy’s Young Scientist Award (BAGEP) in 2015\, a Fulbright
  scholarship in 2013\, the TUBITAK Career Award in 2005\, and the Best
  Paper in Cognitive Vision award at the European Conference on Computer
  Vision in 2002. Her current research interests include computer vision
  and multimedia data mining\, specifically object\, face\, and action
  recognition in large image and video collections and analysis of
  historical documents.\n\nSpeaker(s): Prof. Pinar Duygulu Sahin\,
  \n\nAnkara\, Ankara\, Türkiye
LOCATION:Ankara\, Ankara\, Türkiye
ORGANIZER:mailto:ozergul@metu.edu.tr
SEQUENCE:0
SUMMARY:IEEE AP/MTT/EMC/ED TURKEY CHAPTER SEMINAR SERIES -- SEMINAR 65
URL;VALUE=URI:https://events.vtools.ieee.org/m/224527
X-ALT-DESC;FMTTYPE=text/html:Description: <br /><p><strong>21 February
  2020 (13:40): &nbsp\;IEEE AP/MTT/EMC/ED Turkey Seminar Series
  (S.65)</strong></p>\n<p>Speaker: Prof. Pınar Duygulu Şahin\, Hacettepe
  University</p>\n<p>Topic: "Recognizing and Transferring the Styles of
  Artists Who Illustrate Children&rsquo\;s
  Books"</p>\n<p>Location:&nbsp\;Middle East Technical University\,
  Ankara\, Turkey</p>\n<p>Abstract: In this talk\, I will present our
  recent work exploring illustrations in children&rsquo\;s books as a
  new domain for artist classification and unpaired image-to-image
  translation. Our work is motivated by a young boy&rsquo\;s ability to
  recognize an illustrator&rsquo\;s style in a totally different
  context. The boy&rsquo\;s enthusiasm led us to explore the
  capabilities of machines to recognize the style of
  illustrators.</p>\n<p>First\, we collected pages from
  children&rsquo\;s books to construct a new illustrations dataset
  consisting of about 9500 pages from 24 artists. We used deep networks
  to categorize illustrators\, and with around 94% classification
  performance our method outperformed traditional methods by more than
  10%.</p>\n<p>Going beyond categorization\, we explored style transfer.
  We show that although the current state-of-the-art image-to-image
  translation models successfully transfer either the style or the
  content\, they fail to transfer both at the same time. We propose a
  new generator network to address this issue and show that the
  resulting network strikes a better balance between style and
  content.</p>\n<p>There are no well-defined or agreed-upon evaluation
  metrics for unpaired image-to-image translation. So far\, the success
  of image translation models has been judged by subjective\,
  qualitative visual comparison on a limited number of images. To
  address this problem\, we propose a new framework for the quantitative
  evaluation of image-to-illustration models\, in which both content and
  style are taken into account using separate classifiers. Under this
  new evaluation framework\, our proposed model performs better than the
  current state-of-the-art models on the illustrations
  dataset.</p>\n<p>Bio: Pınar Duygulu received her BSc\, MSc\, and PhD
  degrees from the Department of Computer Engineering at Middle East
  Technical University\, Ankara\, Turkey in 1996\, 1998\, and 2003\,
  respectively. During her PhD\, she was a visiting scholar at the
  University of California\, Berkeley under the supervision of Prof.
  David Forsyth. After a post-doctoral position with the Informedia
  Project at Carnegie Mellon University\, she joined the Department of
  Computer Engineering at Bilkent University\, Ankara\, Turkey in 2004.
  In 2014 and 2015 she was a research associate at Carnegie Mellon
  University. Currently\, she is a faculty member in the Department of
  Computer Engineering at Hacettepe University\, Ankara\, Turkey. She
  received the Science Academy&rsquo\;s Young Scientist Award (BAGEP) in
  2015\, a Fulbright scholarship in 2013\, the TUBITAK Career Award in
  2005\, and the Best Paper in Cognitive Vision award at the European
  Conference on Computer Vision in 2002. Her current research interests
  include computer vision and multimedia data mining\, specifically
  object\, face\, and action recognition in large image and video
  collections and analysis of historical documents.</p>
END:VEVENT
END:VCALENDAR