<!DOCTYPE HTML>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Yoonhwa (Yuna) Jung</title>
<meta name="author" content="Yuna Jung">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="shortcut icon" href="images/favicon/org_ico.ico" type="image/x-icon">
<link rel="stylesheet" type="text/css" href="stylesheet_final.css">
</head>
<body>
<div class="container">
<div>
<h1>Yoonhwa (Yuna) Jung</h1>
<p>
I am an Assistant Professor at Louisiana State University in the Department of Construction Management. I received my PhD in CEE and my Master's in CS from the University of Illinois Urbana-Champaign (UIUC), advised by Prof. <a href="https://cee.illinois.edu/directory/profile/mgolpar">Mani Golparvar-Fard</a> and Prof. <a href="https://cs.illinois.edu/about/people/faculty/juliahmr">Julia Hockenmaier</a>. I also worked on the AI R&D team at <a href="https://www.documentcrunch.com/">Document Crunch</a>.
</p>
<div class="title">
<p>
My research focuses on advancing informatics and AI for smart, resilient, and sustainable infrastructure. Key areas include:
</p>
<ul class="sub-list">
<li> applying Natural Language Processing (NLP) and Computer Vision to analyze construction best practices, generating actionable insights for automated project planning and controls; </li>
<li> leveraging synergies between lean construction, BIM, and reality modeling for construction production management;</li>
<li> enhancing the reliability of generative AI models and creating foundational multimodal models for proactive construction management systems. </li>
</ul>
</div>
<p>
My contributions also extend to efficiency and effectiveness in general NLP tasks, where I have proposed novel approaches to broadly applicable problems, yielding a paper accepted to EMNLP 2023 Findings and three papers in preparation for top-tier AI/ML conferences.
</p>
</div>
<div class="image-container">
<a href="images/fullphoto.JPG">
<div class="profile-photo"></div>
</a>
<p>
<a href="mailto:yoonhwa.jung@lsu.edu">Email</a> /
<a href="https://www.linkedin.com/in/yoonhwa-jung">LinkedIn</a> /
<a href="https://scholar.google.com/citations?user=3ucRFYgAAAAJ&hl=en&oi=ao">Scholar</a>
</p>
</div>
</div>
<div class="navbar">
<a href="#education">Education</a>
<a href="#work">Work Experience</a>
<a href="#awards">Awards & Scholarship</a>
<a href="#research">Research</a>
<a href="#teaching">Teaching</a>
<a href="#reviewer">Services</a>
<a href="#affiliations">Affiliations</a>
</div>
<div class="section" id="education">
<h2>Education</h2>
<div class="item">
<div class="title">
<p>University of Illinois Urbana-Champaign</p>
<ul class="sub-list">
<li>Ph.D. in Civil Engineering — AI in Construction</li>
<li>Master of Computer Science </li>
<li>Master of Science in Civil Engineering, Construction Eng. & Mgmt.</li>
</ul>
</div>
<div class="date">
<p>Aug 2019 - Aug 2025</p>
</div>
</div>
<div class="item">
<div class="title">
<p>Hanyang University, Seoul, South Korea</p>
<ul class="sub-list">
<li>B.S. in Civil Engineering</li>
<li>High Honors for Academic Achievement (Sep 2015)</li>
</ul>
</div>
<div class="date">
<p>Mar 2015 - Feb 2019</p>
</div>
</div>
</div>
<div class="section" id="work">
<h2>Work Experience</h2>
<div class="item">
<div class="title">
<p>Assistant Professor @ LSU</p>
<ul class="sub-list">
<li>Hiring two fully funded PhD students</li>
</ul>
</div>
<div class="date">
<p>Aug 2025 - Current</p>
</div>
</div>
<div class="item">
<div class="title">
<p>Machine Learning Intern, Document Crunch</p>
<ul class="sub-list">
<li>Recognized as an Inspired Team Member, Aug 2024</li>
<li>Improved the reliability of the AI chat service and analysis for construction specifications and drawings</li>
</ul>
</div>
<div class="date">
<p>May 2024 - Aug 2025</p>
</div>
</div>
<div class="item">
<div class="title">
<p>Research and Teaching Assistant, University of Illinois at Urbana-Champaign</p>
<ul class="sub-list">
<li>NLP-based Schedule Analytics to Generate Actionable Insights for Project Planning and Controls</li>
<li>Gave guest lectures for CEE 320 Construction Engineering and Management; served as Lead TA</li>
</ul>
</div>
<div class="date">
<p>Aug 2019 - May 2025</p>
</div>
</div>
</div>
<div class="section" id="awards">
<h2>Awards & Scholarships</h2>
<div class="item">
<div class="title">
<p><a href="https://www.weoneil.com/">William E. O'Neil</a> Award, UIUC</p>
</div>
<div class="date">
<p>2024 - 2025</p>
</div>
</div>
<div class="item">
<div class="title">
<p>Scholarship, <a href="https://www.kacepma.org/">Korean-American CEPM Association</a> and <a href="https://www.hmglobal.com/en/index.asp">HanmiGlobal</a></p>
</div>
<div class="date">
<p>2024</p>
</div>
</div>
<div class="item">
<div class="title">
<p>Grand prize, 4th Industrial Revolution Imagination Contest, Hanyang Univ</p>
</div>
<div class="date">
<p>2017</p>
</div>
</div>
<div class="item">
<div class="title">
<p>Encouragement Prize, Design Construction Competition, KSCE</p>
</div>
<div class="date">
<p>2016</p>
</div>
</div>
<div class="item">
<div class="title">
<p>4-Year Full Scholarship, POSCO</p>
</div>
<div class="date">
<p>2015 - 2018</p>
</div>
</div>
</div>
<div class="research" id="research">
<h2>Research</h2>
<p>
I'm interested in natural language processing, computer vision, multimodal learning, deep learning, generative AI, and human-computer interaction.
Most of <a href="https://scholar.google.com/citations?user=3ucRFYgAAAAJ&hl=en&oi=ao">my research</a> is about inferring the knowledge (i.e., best practices) of construction planning and controls from textual (e.g., schedules, daily reports, change orders, RFIs) and visual (e.g., images, point clouds) information.
Representative papers are <span class="highlight">highlighted</span>.
</p>
<h3>Journal Papers</h3>
<div class="container2 highlight">
<div class="column-img">
<div class="video-container">
<img src="images/expertplanner.jpg" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://doi.org/10.1016/j.aei.2026.104439">
<span class="papertitle">ExpertPlanner: A mixture-of-experts transformer language model for decomposing construction look-ahead plan tasks from long-term master schedules</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Mani Golparvar-Fard
<br>
<em>Advanced Engineering Informatics</em>, 2026
<p>A new NLP transformer model, <span class="small-caps">ExpertPlanner</span>, to automatically generate granular lookahead tasks from a long-term master schedule. ExpertPlanner serves as a vital bridge between high-level project strategy and daily operational execution.</p>
</div>
</div>
<div class="container2">
<div class="column-img">
<div class="video-container">
<video width="100%" height="100%" muted autoplay loop style="position: absolute; top: 0; left: 0; opacity: 1; z-index: -1;">
<source src="images/VSD_demo.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<img src="images/VSD2.jpg" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://github.com/joonv2/VisualSiteDiary">
<span class="papertitle">VisualSiteDiary: A Detector-Free Vision-Language Transformer Model for Captioning Photologs for Daily Construction Reporting and Image Retrievals</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Ikhyun Cho, Shun-Hsiang Hsu, Mani Golparvar-Fard
<br>
<em>Automation in Construction</em>, 2024
<br>
<a href="https://github.com/joonv2/VisualSiteDiary">project page</a> /
<a href="https://github.com/joonv2/VisualSiteDiary/blob/main/media/VSD_demo.gif">video</a> /
<a href="https://doi.org/10.1016/j.autcon.2024.105483">Journal paper</a>
<br><br>
<p>A Vision Transformer-based image captioning model, VisualSiteDiary, which creates human-readable captions for daily progress and work-activity logs and enhances image retrieval tasks. Presents a new image-caption pair dataset (the VSD dataset) and a demo toward real-time construction site daily log reporting. It generates superior-quality captions compared to state-of-the-art image captioning models.</p>
</div>
</div>
<div class="container2 highlight">
<div class="column-img">
<div class="video-container">
<img src="images/UniformatBridge_cover.jpg" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://doi.org/10.1016/j.autcon.2023.105183">
<span class="papertitle">UniformatBridge: Transformer language model for mapping construction schedule activities to uniformat categories</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Julia Hockenmaier, Mani Golparvar-Fard
<br>
<em>Automation in Construction</em>, 2024
<br>
<a href="https://doi.org/10.1016/j.autcon.2023.105183">Journal paper</a> /
<a href="https://doi.org/10.1061/9780784485224.006">Conference paper</a>
<br><br>
<p>A new NLP transformer model, <span class="small-caps">UniformatBridge</span>, to automatically map schedule activities to ASTM UniFormat classes, embedding construction schedule sequencing knowledge. UniformatBridge serves as a universal identifier enabling automated creation of 4D BIMs and streamlines mapping between schedule, cost and payment application data.</p>
</div>
</div>
<div class="container2">
<div class="column-img">
<div class="video-container">
<video width="100%" height="100%" muted autoplay loop style="position: absolute; top: 0; left: 0; opacity: 1; z-index: -1;">
<source src="images/schedulerevision2.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<img src="images/schedulerevision_cover.png" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://doi.org/10.1016/j.autcon.2023.104896">
<span class="papertitle">Construction schedule augmentation with implicit dependency constraints and automated generation of lookahead plan revisions</span>
</a>
<br><br>
Fouad Amer, <strong>Yoonhwa Jung</strong>, Mani Golparvar-Fard
<br>
<em>Automation in Construction</em>, 2023
<br>
<a href="https://doi.org/10.1016/j.autcon.2023.104896">Journal paper</a>
<br><br>
<p>An NLP transformer method for deciphering implicit construction dependencies along with the role and flexibility of activity relationships. A method for revising lookahead plans based on the flexibility of activity dependencies, mitigating the risk of delays.</p>
</div>
</div>
<div class="container2 highlight">
<div class="column-img">
<div class="video-container">
<img src="images/autoalign_flip1.png" data-hover="images/autoalign_flip2.png">
</div>
</div>
<div class="column-text">
<a href="https://doi.org/10.1016/j.autcon.2021.103929">
<span class="papertitle">Transformer machine learning language model for auto-alignment of long-term and short-term plans in construction</span>
</a>
<br><br>
Fouad Amer, <strong>Yoonhwa Jung</strong>, Mani Golparvar-Fard
<br>
<em>Automation in Construction</em>, 2021
<br>
<a href="https://doi.org/10.1016/j.autcon.2021.103929">Journal paper</a>
<br><br>
<p>This paper presents the first attempt to automate linking look-ahead planning tasks to master-schedule activities following an NLP-based multi-stage ranking formulation. Our model employs distance-based matching for candidate generation and a Transformer architecture for final matching.</p>
</div>
</div>
<div class="container2">
<div class="column-img">
<div class="video-container">
<img src="images/pvpowerpred.png" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://doi.org/10.1016/j.jclepro.2019.119476">
<span class="papertitle">Long short-term memory recurrent neural network for modeling temporal patterns in long-term power forecasting for solar PV facilities: Case study of South Korea</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Jaehoon Jung, Byungil Kim, SangUk Han
<br>
<em>Journal of Cleaner Production</em>, 2020
<br>
<a href="https://doi.org/10.1016/j.jclepro.2019.119476">Journal paper</a>
<br><br>
<p>An LSTM-RNN-based forecasting model is presented for the investigation of PV sites. Time-series data of spatial and meteorological conditions are considered, and this work allows searching for and evaluating suitable locations for PV plants across a wide area.</p>
<br><br>
<font color="red"><strong>*Cited over 130 times</strong></font>
</div>
</div>
<br>
<h3>Peer-reviewed Conference Papers</h3>
<div class="container2 highlight">
<div class="column-img">
<div class="video-container">
<img src="images/MAS.JPG" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="">
<span class="papertitle">Self-Evaluation of Single and Multi-Agent LLMs as Assistants in Inspection Reporting Workflows</span>
</a>
<br><br>
Shun-Hsiang Hsu<sup>†</sup>, <strong>Yoonhwa Jung</strong><sup>†</sup>, Junryu Fu, Mani Golparvar-Fard
<br>
<em>Proceedings of the CSCE Construction Specialty Conference and ASCE Construction Research Congress</em>, 2025
<br>
<a href="https://doi.org/10.22260/CRC-CSCE-2025/0106">Conference paper</a> (<sup>†</sup>Equal contribution)
<br><br>
<p>A multi-agent system based on the Llama 3.2 vision model is compared to a RAG-based single-agent system, using pseudo ground-truth reports from OpenAI's GPT-4o, across five inspection reporting workflows: bridge inspection, building inspection, indoor progress reporting, outdoor progress reporting, and facility management inspection.
<br><br>
</p>
</div>
</div>
<div class="container2 highlight">
<div class="column-img">
<div class="video-container">
<img src="images/carbon.jpg" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="">
<span class="papertitle">The True Environmental Cost of Data Centers: An Insight into the Construction Phase of the Era of Artificial Intelligence</span>
</a>
<br><br>
Juan D. Núñez-Morales<sup>†</sup>, <strong>Yoonhwa Jung</strong><sup>†</sup>, Mani Golparvar-Fard
<br>
<em>CIB Conferences</em>, 2025
<br>
<a href="https://doi.org/10.7771/3067-4883.2129">Conference paper</a> (<sup>†</sup>Equal contribution)
<br><br>
<p>Built on <a href="https://doi.org/10.1016/j.autcon.2023.105183">UniformatBridge</a>, ASTM Uniformat classification is utilized to analyze embodied carbon emissions aligned with schedule activities. Compared with commercial residential building construction, the results pave the way toward sustainable data center construction operations.
<br><br>
</p>
</div>
</div>
<div class="container2 highlight">
<div class="column-img">
<div class="video-container">
<video width="100%" height="100%" muted autoplay loop style="position: absolute; top: 0; left: 0; opacity: 1; z-index: -1;">
<source src="images/ConstructVoiceBot.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<img src="images/CVB.png" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="">
<span class="papertitle">Voice-assisted AI-Chatbot for Construction: Speech-to-Text Deep Learning Transformer trained with Synthetic Datasets</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Mani Golparvar-Fard
<br>
<em>Computing in Civil Engineering</em>, 2024
<br>
<a href="https://doi.org/10.1061/9780784486115.094">Conference paper</a>
<br><br>
<p>This paper presents a Voice-activated Artificial Intelligence (AI) assistant, CONSTRUCTVOICEBOT. Our solution leverages the first domain-specific Speech-to-Text Transformer model, called CON-WHISPER, with a synthetic voice-text dataset from 35 commercial building projects. We delve into practical applications through use cases such as Time and Material reporting, daily construction reporting, quality assurance, and curation of construction workflows.
<br><br>
</p>
</div>
</div>
<div class="container2 highlight">
<div class="column-img">
<div class="video-container">
<img src="images/isarc_2024.png" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="">
<span class="papertitle">Evaluation of Mapping Computer Vision Segmentation from Reality Capture to Schedule Activities for Construction Monitoring in the Absence of Detailed BIM</span>
</a>
<br><br>
Juan D. Núñez-Morales<sup>†</sup>, <strong>Yoonhwa Jung</strong><sup>†</sup>, Mani Golparvar-Fard
<br>
<em>Proceedings of the 41st ISARC</em>, 2024
<br>
<a href="">Conference paper</a> (<sup>†</sup>Equal contribution)
<br><br>
<p>Built on <a href="https://doi.org/10.1016/j.autcon.2023.105183">UniformatBridge</a>, ASTM Uniformat classification is utilized to map color-coded 3D point clouds to schedule activities without relying on BIM as a baseline. Exemplary results from new transformer-based models with few-shot learning are shown.
<br><br>
</p>
</div>
</div>
<div class="container2">
<div class="column-img">
<div class="video-container">
<img src="images/CRC_2024.png" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://doi.org/10.1061/9780784485262.084">
<span class="papertitle">Bi-Directional Image-to-Text Mapping for NLP-Based Schedule Generation and Computer Vision Progress Monitoring</span>
</a>
<br><br>
Juan D. Núñez-Morales, <strong>Yoonhwa Jung</strong>, Mani Golparvar-Fard
<br>
<em>Construction Research Congress</em>, 2024
<br>
<a href="https://doi.org/10.1061/9780784485262.084">Conference paper</a>
<br><br>
<p>The AIConstruct system is presented to demonstrate, for the first time, how the integration of text and images can create seamless data synchronization for construction progress monitoring and automated schedule generation, unlocking a new research paradigm.</p>
</div>
</div>
<div class="container2">
<div class="column-img">
<div class="video-container">
<img src="images/schedule_health.png" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://doi.org/10.1061/9780784483978.037">
<span class="papertitle">Integrated Heuristic and Machine Learning Approach for Schedule Health Monitoring in Construction</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Fouad Amer, Mani Golparvar-Fard
<br>
<em>Construction Research Congress</em>, 2022
<br>
<a href="">Journal paper (in preparation)</a>/
<a href="https://doi.org/10.1061/9780784483978.037">Conference paper</a>
<br><br>
<p>Building on the predefined rules and heuristics formulated in the Defense Contract Management Agency (DCMA)’s 14 Point Schedule Quality Assessment, this paper explores the feasibility of heuristic-based and deep learning methods to assess a project schedule health from qualitative and quantitative perspectives.</p>
</div>
</div>
<div class="container2">
<div class="column-img">
<div class="organ">
<h2>Review <br> Paper</h2>
</div>
</div>
<div class="column-text">
<a href="http://itc.scix.net/paper/w78-2021-paper-043">
<span class="papertitle">A systematic review on the requirements on BIM maturity and formal representation of sequencing knowledge for automated construction scheduling</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Fouad Amer, Mani Golparvar-Fard
<br>
<em>Proceedings of the 38th International Conference of CIB W78, Luxembourg</em>, 2021
<br>
<a href="http://itc.scix.net/paper/w78-2021-paper-043">Conference paper</a>
<br><br>
<p>A close examination of the problems underpinning construction scheduling theory and practice, such as sequencing logic and activity description, offering a systematic review of: 1) the way in which BIM-driven schedules are formalized; and 2) the challenges of tying Building Information Modeling (BIM) to project schedules and/or BIM-driven schedule creation techniques.</p>
</div>
</div>
<br>
<h3>Computer Science papers</h3>
<div class="container2">
<div class="column-img">
<div class="video-container">
<video width="100%" height="100%" muted autoplay loop style="position: absolute; top: 0; left: 0; opacity: 1; z-index: -1;">
<source src="images/sir_absc.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<img src="images/sir_absc_cover.png" style="width: 100%; display: block;">
</div>
</div>
<div class="column-text">
<a href="https://aclanthology.org/2023.findings-emnlp.572/">
<span class="papertitle">SIR-ABSC: Incorporating Syntax into RoBERTa-based Sentiment Analysis Models with a Special Aggregator Token</span>
</a>
<br><br>
Ikhyun Cho, <strong>Yoonhwa Jung</strong>, Julia Hockenmaier
<br>
<em>EMNLP Findings</em>, 2023
<br>
<a href="https://github.com/ihcho2/SIR-ABSC">project page</a>/
<a href="https://aclanthology.org/2023.findings-emnlp.572.pdf">Paper</a>
<br><br>
<p>We present a simple but effective method to incorporate syntactic dependency information directly into transformer-based language models (e.g., RoBERTa) for Aspect-Based Sentiment Classification (ABSC). Despite its simplicity, SIR-ABSC outperforms more complex syntax-aware models, yielding new state-of-the-art results on ABSC.</p>
</div>
</div>
<div class="container2">
<div class="column-img">
<div class="video-container">
<img src="images/ptmoecap_concealed.png" data-hover="images/ptmoecap_concealed.png">
</div>
</div>
<div class="column-text">
<a href="">
<span class="papertitle">Prompting for Mixture-of-Experts: A Prompt-based Mixture-of-Experts framework for Stylized Image Captioning</span>
</a>
<br><br>
Ikhyun Cho, <strong>Yoonhwa Jung</strong>, Julia Hockenmaier
<br>
<em>Preparing a submission</em>, 2025
<br>
<a href=""></a>
<br>
<p>We introduce PTMoE-Cap, a simple yet effective approach for generating stylized image captions, leveraging the synergy of Mixture-of-Experts (MoE) and prompt learning techniques as an effective routing source.</p>
</div>
</div>
<div class="container2 highlight">
<div class="organ">
<h2>Machine <br> Unlearning</h2>
</div>
<div class="column-text">
<a href="https://github.com/joonv2/ARU">
<span class="papertitle">Attack and Reset for Unlearning: Exploiting Adversarial Noise toward Machine Unlearning through Parameter Re-initialization</span>
</a>
<br><br>
<strong>Yoonhwa Jung</strong>, Ikhyun Cho, Shun-Hsiang Hsu, Julia Hockenmaier
<br>
<em>Preparing a submission</em>, 2025
<br>
<a href="https://github.com/joonv2/ARU">project page</a>/
<a href="https://arxiv.org/abs/2401.08998">arXiv</a>
<br><br>
<p>We leverage meticulously crafted adversarial noise to generate a parameter mask, effectively resetting certain parameters and rendering them unlearnable. A novel approach called Attack-and-Reset for Unlearning (ARU) outperforms current state-of-the-art results on two facial machine-unlearning benchmark datasets.</p>
</div>
</div>
</div>
<div class="section" id="teaching">
<h2>Teaching</h2>
<div class="container2">
<div class="column-img">
<div class="teaching-img">
<img src="images/teach.jpg">
</div>
</div>
<div class="left-text">
<strong><a href="">UIUC CEE 320 - Construction Engineering</a></strong>
<br>
Teaching Assistant (Lead TA), Fall 2023
<br> (selected as one of the best courses in student evaluations)
<br>
Teaching Assistant, Fall 2021, Spring 2022, Fall 2022, Spring 2023
<br>
<br>
<strong>Summer High School Science Program</strong>
<br>
Teacher, Mirim Girls’ Info. Sci. High School, Seoul, Jul. 2015
</div>
</div>
</div>
<div class="section" id="reviewer">
<h2>Services</h2>
<div class="PA-p">
<p>
Vice President | Korean Women’s Grad. Stdnt. Assoc. of Civil and Env. Eng., UIUC | 2024 – 2025
</p>
<p>
Student President | <a href="https://www.kacepma.org/">Korean-American CEPM Association</a> | 2024 – 2025
</p>
<p>
Reviewer | ASCE Journal of Construction Engineering and Management | 2024 – Present
</p>
<p>
Reviewer | Elsevier Automation in Construction | 2024 – Present
</p>
<p>
Reviewer | ASCE Journal of Computing in Civil Engineering | 2023 – Present
</p>
<p>
Member | Association for Computational Linguistics | 2023 – Present
</p>
<p>
Student member | ASCE Data Sensing and Analysis Committee | 2023 – Present
</p>
<p>
Student member | ASCE Visualization, Information Modeling, and Simulation | 2023 – Present
</p>
<p>
Student member | American Society of Civil Engineers (ASCE) | 2022 – Present
</p>
<p>
Student member | Korean Society of Civil Engineers (KSCE) | 2018 – 2019
</p>
<p>
Director of Students’ Association Marketing Depart. | Hanyang Univ. | Mar. 2015 – Dec 2016
</p>
</div>
</div>
<div class="section" id="affiliations">
<h2>Affiliations</h2>
<div class="affiliations">
<div class="affiliation">
<a href="https://www.lsu.edu/eng/cm/" target="_blank">
<img src="images/lsu.png" alt="LSU"></a>
<font>LSU<br>2025 - Current</font>
</div>
<div class="affiliation">
<a href="https://www.documentcrunch.com/" target="_blank">
<img src="images/DC.png" alt="Document Crunch"></a>
<font>Document Crunch<br>2024 - 2025</font>
</div>
<!-- <div class="affiliation">
<a href="https://raamac.cee.illinois.edu/team-2023" target="_blank">
<img src="images/raamac.png" alt="RAAMAC AI Lab"></a>
<font>RAAMAC AI Lab<br>2019 - 2025</font>
</div> -->
<div class="affiliation">
<a href="https://cs.illinois.edu/" target="_blank">
<img src="images/uiuccs.jpg" alt="UIUC CS NLP Group"></a>
<font>UIUC CS NLP Group<br>2022 - 2025 </font>
</div>
<div class="affiliation">
<a href="https://cee.illinois.edu/" target="_blank">
<img src="images/uiuccee.png" alt="UIUC CEE"></a>
<font>UIUC CEE <br>2019 - 2025</font>
</div>
<div class="affiliation">
<a href="https://www.hanyang.ac.kr/web/eng/home" target="_blank">
<img src="images/hanyang.png" alt="Hanyang Univ."></a>
<font>Hanyang Univ.<br>2015 - 2019</font>
</div>
</div>
</div>
<footer>
template credits: <a href="https://jonbarron.info/">this</a> and <a href="https://djr2015.github.io/">this</a>.
image credits: Yoonhwa Jung, <a href="https://firefly.adobe.com/">ai-generated</a>.
</footer>
<script>
document.querySelectorAll('.video-container').forEach(function(container) {
const video = container.querySelector('video');
const img = container.querySelector('img');
const originalSrc = img.src;
const hoverSrc = img.getAttribute('data-hover'); // image path to swap in on hover
img.addEventListener('mouseenter', function() {
if (video) {
img.style.opacity = '0';
video.style.opacity = '1';
video.play();
} else if (hoverSrc) {
img.src = hoverSrc; // swap to the hover image
}
});
img.addEventListener('mouseleave', function() {
if (video) {
video.pause();
video.currentTime = 0;
img.style.opacity = '1';
video.style.opacity = '0';
} else {
img.src = originalSrc; // restore the original image
}
});
});
</script>
</body>
</html>