<!DOCTYPE html><html><head><meta name="viewport" content="width=device-width"/><meta charSet="utf-8"/><title>A. Gkaravelis | Publication List</title><meta name="description" content="Research Publications of Anastasios Gkaravelis"/><link rel="icon" href="images/icon1.jpg"/><meta name="robots" content="follow, index"/><link rel="canonical" href="https://agkaravelis.com/publications"/><meta property="og:url" content="https://agkaravelis.com/publications"/><meta property="og:type" content="website"/><meta property="og:site_name" content="Anastasios Gkaravelis"/><meta property="og:title" content="A. Gkaravelis | Publication List"/><meta property="og:description" content="Research Publications of Anastasios Gkaravelis"/><meta property="og:image" content="projects/lotus.jpg"/><meta name="twitter:card" content="summary_large_image"/><meta name="twitter:site" content="@anastasios_Gk"/><meta name="twitter:title" content="A. Gkaravelis | Publication List"/><meta name="twitter:description" content="Research Publications of Anastasios Gkaravelis"/><meta name="twitter:image" content="projects/lotus.jpg"/><meta name="next-head-count" content="18"/><link rel="preload" href="/_next/static/css/23032560604615c0.css" as="style"/><link rel="stylesheet" href="/_next/static/css/23032560604615c0.css" data-n-g=""/><link rel="preload" href="/_next/static/css/b6da658441850eb7.css" as="style"/><link rel="stylesheet" href="/_next/static/css/b6da658441850eb7.css" data-n-p=""/><noscript data-n-css=""></noscript><script defer="" nomodule="" src="/_next/static/chunks/polyfills-5cd94c89d3acac5f.js"></script><script src="/_next/static/chunks/webpack-e56be2324ea52bca.js" defer=""></script><script src="/_next/static/chunks/framework-5f4595e5518b5600.js" defer=""></script><script src="/_next/static/chunks/main-a054bbf31fb90f6a.js" defer=""></script><script src="/_next/static/chunks/pages/_app-e8dee45e645b8596.js" defer=""></script><script src="/_next/static/chunks/706-4d6bf51144d795db.js" 
defer=""></script><script src="/_next/static/chunks/pages/publications-72284fe7b4948a43.js" defer=""></script><script src="/_next/static/mv3oqj_dj26xhTJXP5WBB/_buildManifest.js" defer=""></script><script src="/_next/static/mv3oqj_dj26xhTJXP5WBB/_ssgManifest.js" defer=""></script><script src="/_next/static/mv3oqj_dj26xhTJXP5WBB/_middlewareManifest.js" defer=""></script></head><body><div id="__next" data-reactroot=""><div><header class="Header2_header__NDpuY"><div class="Header2_container__PuUOr"><div>Anastasios Gkaravelis</div><nav class="Header2_nav__OqKGx"><span class="Header2_site_links__5dgZW">Home</span><span class="Header2_site_links__5dgZW">Projects</span><span class="Header2_site_links__5dgZW">Publications</span><span class="Header2_site_links__5dgZW">Resume</span></nav><div class="Header2_social_container__sGYId"><div class="Header2_social__VTV42"><a href="https://github.com/agkar"><img src="images/github-mark-white.png" width="28" height="28" alt="github link"/></a></div><div class="Header2_social__VTV42"><a href="https://twitter.com/anastasios_Gk"><img src="images/Twitter_Social_Icon_Circle_Color.png" width="28" height="28" alt="twitter link"/></a></div><div class="Header2_social__VTV42"><a href="https://www.linkedin.com/in/agkaravelis/"><img src="images/LI-In-Bug.png" width="28" height="28" alt="linkedin link"/></a></div><div class="Header2_social__VTV42"><a href="mailto:a.gkaravelis@gmail.com" target="_top"><svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" viewBox="-2 -4 24 28" fill="none" stroke="rgb(220,220,220)" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M0 0 L20 0 L20 20 L0 20 L0 0 L10 8 L20 0"></path></svg></a></div></div></div></header><main><h1>Publication List</h1><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><img class="Publication_thumbnail__lL9FI" src="publications/od_cgi23.webp" alt="opening cgi 
publication"/></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Opening Design using Bayesian Optimization</div><div class="Publication_abstract__qipCa">N. Vitsas, I. Evangelou, G. Papaioannou, <!-- --> <strong>A. Gkaravelis</strong> </div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Virtual Reality and Intelligent Hardware, Proc. Computer Graphics International, 2023</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Opening design is a major consideration in architectural buildings during early structural layout specification. Decisions regarding the geometric characteristics of windows, skylights, hatches, etc., greatly impact the overall energy efficiency, airflow and appearance of a building, both internally and externally. In this work, we employ a goal-based, illumination-driven approach to opening design using a Bayesian Optimization approach, based on Gaussian Processes. A method is proposed that allows a designer to easily set lighting intentions along with qualitative and quantitative characteristics of desired openings. 
All parameters are optimized within a cost minimization framework to calculate geometrically feasible, architecturally admissible and aesthetically pleasing openings of any desired shape, while respecting the designer's lighting constraints.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="dsad" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="https://link.springer.com/content/pdf/10.1007/s00371-023-02975-y.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/VC2023_Neural_3.webp" alt="neural cgi paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">A Neural Builder for Spatial Subdivision Hierarchies</div><div class="Publication_abstract__qipCa">I. Evangelou, G. Papaioannou, K. Vardis, <!-- --> <strong>A. Gkaravelis</strong> </div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Visual Computer 2023, proc. CGI 2023</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Spatial data structures, such as k-d trees and bounding volume hierarchies, are extensively used in computer graphics for the acceleration of spatial queries in ray tracing, nearest neighbour searches and other tasks. Typically, the splitting strategy employed during the construction of such structures is based on the greedy evaluation of a predefined objective function, resulting in a less than optimal subdivision scheme. In this work, for the first time, we propose the use of unsupervised deep learning to infer the structure of a fixed-depth k-d tree from a constant, subsampled set of the input primitives, based on the recursive evaluation of the cost function at hand. 
This results in high-quality upper spatial hierarchy, inferred in constant time and without paying the intractable price of a fully recursive tree optimisation. The resulting fixed-depth tree can then be further expanded, in parallel, into either a full k-d tree or transformed into a bounding volume hierarchy, with any known conventional tree builder. The approach is generic enough to accommodate different cost functions, such as the popular surface area and volume heuristics. We experimentally validate that the resulting hierarchies have competitive traversal performance with respect to established tree builders, while maintaining minimal overhead in construction times.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="https://link.springer.com/content/pdf/10.1007/s00371-023-02975-y.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="https://lotus.aueb.gr/content/EG23_OBB_Tree_Builder.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/EG23_OBB_Tree_Builder2.webp" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Parallel Transformation of Bounding Volume Hierarchies into Oriented Bounding Box Trees</div><div class="Publication_abstract__qipCa">N. Vitsas, I. Evangelou, G. Papaioannou, <!-- --> <strong>A. Gkaravelis</strong> </div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Computer Graphics Forum 2023, proc. 
Eurographics 2023</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Oriented bounding box (OBB) hierarchies can be used instead of hierarchies based on axis-aligned bounding boxes (AABB), providing tighter fitting to the underlying geometric structures and resulting in improved interference tests, such as ray-geometry intersections. In this paper, we present a method for the fast, parallel transformation of an existing bounding volume hierarchy (BVH), based on AABBs, into a hierarchy based on oriented bounding boxes. To this end, we parallelise a high-quality OBB extraction algorithm from the literature to operate as a standalone OBB estimator and further extend it to efficiently build an OBB hierarchy in a bottom up manner. This agglomerative approach allows for fast parallel execution and the formation of arbitrary, high-quality OBBs in bounding volume hierarchies. The method is fully implemented on the GPU and extensively evaluated with ray intersections.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="https://lotus.aueb.gr/content/EG23_OBB_Tree_Builder.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://youtu.be/zCg31NUkZGs" target="__blank"><img width="28" height="28" src="images/video.svg" alt="paper_video"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://lotus.aueb.gr/content/obvh_presentation.pdf" target="__blank"><img width="28" height="28" src="images/presentation.svg" alt="paper_presentation"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://github.com/cgaueb/obvh" target="__blank"><img width="28" height="28" src="images/github-mark.png" alt="paper_code"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div 
class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/Virtual_Sculpting.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/sculpting.webp" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Real-Time Volume Editing on Low-Power Virtual Reality Devices</div><div class="Publication_abstract__qipCa">I. Evangelou, <!-- --> <strong>A. Gkaravelis</strong> <!-- -->, G. Papaioannou</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Proc. 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (GRAPP), 2023</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->The advent of consumer-grade, low-power, untethered virtual reality devices has spurred the creation of numerous applications, with important implications to training, socialisation, education and entertainment. However, such devices are typically based on modified mobile architectures and processing units, offering limited capabilities in terms of geometry and shading throughput, compared to their desktop counterparts. In this work we provide insights on how to implement two combined and particularly challenging tasks on such a platform, those of real-time volume editing and physically-based rendering. 
We implement and showcase our techniques in the context of a virtual sculpting edutainment application, intended for mass deployment at a virtual reality exhibition centre.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/Virtual_Sculpting.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/RemoteTeachingRG.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/remote_teaching.jpg" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Remote Teaching Advanced Rendering Topics Using the Rayground Platform</div><div class="Publication_abstract__qipCa">A. A. Vasilakis, G. Papaioannou, N. Vitsas, <!-- --> <strong>A. Gkaravelis</strong> </div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">IEEE Computer Graphics and Applications, 41(5), p. 99-103, 2021.</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Rayground is a novel online framework for fast prototyping and interactive demonstration of ray tracing algorithms. It aims to set the ground for the online development of ray-traced visualization algorithms in an accessible manner for everyone, stripping off the mechanics that get in the way of creativity and the understanding of the core concepts. Due to the COVID-19 pandemic, remote teaching and online coursework have taken center stage. 
In this work, we demonstrate how Rayground can incorporate advanced instructive rendering media during online lectures as well as offer attractive student assignments in an engaging, hands-on manner. We cover things to consider when building or porting methods to this new development platform, best practices in remote teaching and learning activities, and time-tested assessment and grading strategies suitable for fully online university courses.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/RemoteTeachingRG.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://cgaueb.github.io/publications/remote_teaching_rg/" target="__blank"><img width="28" height="28" src="images/github-mark.png" alt="paper_code"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/webrays.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/webrays.webp" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">WEBRAYS: Ray Tracing on the Web</div><div class="Publication_abstract__qipCa">N. Vitsas, <!-- --> <strong>A. Gkaravelis</strong> <!-- -->, A. A. Vasilakis, G. Papaioannou</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Ray Tracing Gems II, A. Marrs (Ed.), P. Shirley (Ed.), I. Wald (Ed.), ISBN 978-1-4842-4427-2, p. 281-299, 2021.</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->This chapter introduces WebRays, a GPU-accelerated ray intersection engine for the World Wide Web. 
It aims to offer a flexible and easy-to-use programming interface for robust and high-performance ray intersection tests on modern browsers. We cover design considerations, best practices, and usage examples for several ray tracing tasks.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/webrays.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://cgaueb.github.io/publications/webrays/" target="__blank"><img width="28" height="28" src="images/github-mark.png" alt="paper_code"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/Rayground___EG_2020_Educational.pdf"><img class="Publication_thumbnail__lL9FI" src="projects/rayground.webp" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Rayground: An Online Educational Tool for Ray Tracing</div><div class="Publication_abstract__qipCa">N. Vitsas, <!-- --> <strong>A. Gkaravelis</strong> <!-- -->, A. A. Vasilakis, K. Vardis, G. Papaioannou</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Eurographics 2020 Educational paper track, May, 2020</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->In this paper, we present Rayground; an online, interactive education tool for richer in-class teaching and gradual self-study, which provides a convenient introduction into practical ray tracing through a standard shader-based programming interface. 
Setting up a basic ray tracing framework via modern graphics APIs, such as DirectX 12 and Vulkan, results in complex and verbose code that can be intimidating even for very competent students. On the other hand, Rayground aims to demystify ray tracing fundamentals, by providing a well-defined WebGL-based programmable graphics pipeline of configurable distinct ray tracing stages coupled with a simple scene description format. An extensive discussion is further offered describing how both undergraduate and postgraduate computer graphics theoretical lectures and laboratory sessions can be enhanced by our work, to achieve a broad understanding of the underlying concepts. Rayground is open, cross-platform, and available to everyone.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/Rayground___EG_2020_Educational.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://www.youtube.com/watch?v=CQycwzFrbSo" target="__blank"><img width="28" height="28" src="images/video.svg" alt="paper_video"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://www.youtube.com/watch?v=isLY6yUIMMA" target="__blank"><img width="28" height="28" src="images/presentation.svg" alt="paper_presentation"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://rayground.com" target="__blank"><img width="28" height="28" src="images/world.svg" alt="paper_website"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="https://diglib.eg.org/bitstream/handle/10.1111/cgf13930/v39i2pp291-301.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/iflo.jpg" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div 
class="Publication_projectTitle__yTLBh">Illumination-Guided Furniture Layout Optimization</div><div class="Publication_abstract__qipCa">N. Vitsas, G. Papaioannou, <!-- --> <strong>A. Gkaravelis</strong> <!-- -->, A. A. Vasilakis</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Computer Graphics Forum (proc. Eurographics 2020), 39(2), 2020.</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Lighting plays a very important role in interior design. However, in the specific problem of furniture layout recommendation, illumination has been either neglected or addressed with empirical or very simplified solutions. The effectiveness of a particular layout in its expected task performance can be greatly affected by daylighting and artificial illumination in a non-trivial manner. In this paper, we introduce a robust method for furniture layout optimization guided by illumination constraints. The method takes into account all dominant light sources, such as sun light, skylighting and fixtures, while also being able to handle movable light emitters. For this task, the method introduces multiple generic illumination constraints and physically-based light transport estimators, operating alongside typical geometric design guidelines, in a unified manner. 
We demonstrate how to produce furniture arrangements that comply with important safety, comfort and efficiency illumination criteria, such as glare suppression, under complex light-environment interactions, which are very hard to handle using empirical or simplified models.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="https://diglib.eg.org/bitstream/handle/10.1111/cgf13930/v39i2pp291-301.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://www.youtube.com/watch?v=0MtZp-0CHfs" target="__blank"><img width="28" height="28" src="images/video.svg" alt="paper_video"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://www.youtube.com/watch?v=d8yRxoVStXs" target="__blank"><img width="28" height="28" src="images/presentation.svg" alt="paper_presentation"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/DetailHighlighting2.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/marbles.jpg" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Light Optimization for Detail Highlighting</div><div class="Publication_abstract__qipCa"> <strong>A. Gkaravelis</strong> <!-- -->, G. Papaioannou</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Computer Graphics Forum (proc. Pacific Graphics 2018), 37(7), pp. 
37-44, October, 2018.</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->In this paper we propose an effective technique for the automatic arrangement of spot lights and other luminaires on or near user-provided arbitrary mounting surfaces in order to highlight the geometric details of complex objects. Since potential applications include the lighting design for exhibitions and similar installations, the method takes into account obstructing geometry and potential occlusion from visitors and other non-permanent blocking geometry. Our technique generates the most appropriate position and orientation for light sources based on a local contrast maximization near salient geometric features and a clustering mechanism, producing consistent and view-independent results, with minimal user intervention. We validate our method with realistic test cases including multiple and disjoint exhibits as well as high occlusion scenarios.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/DetailHighlighting2.pdf" target="__blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="publications/lodh2018.bib" target="__blank"><img width="28" height="28" src="images/BibIcon.png" alt="paper_bib"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/FeatureHighlighting.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/seh.webp" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Detail Highlighting using a Shadow Edge Histogram</div><div class="Publication_abstract__qipCa"> <strong>A. 
Gkaravelis</strong> <!-- -->, G. Papaioannou</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">proc. Eurographics (short paper), 2017</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->In this paper we propose a simple and effective technique for setting up a configuration of directional light sources to accentuate the prominent geometric features of complex objects by increasing the local shadow contrast near them. Practical applications of such a task are encountered among others in professional photography, and cinematography. The method itself, which is based on a voting mechanism, quickly produces consistent and view-independent results, with minimal user intervention.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/FeatureHighlighting.pdf" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="publications/seh2017.bib" target="_blank"><img width="28" height="28" src="images/BibIcon.png" alt="paper_bib"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="https://dl.acm.org/citation.cfm?id=2962791"><img class="Publication_thumbnail__lL9FI" src="publications/cgi2016.jpg" alt="inverse lighting design paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Inverse Lighting Design using a Coverage Optimization Strategy</div><div class="Publication_abstract__qipCa"> <strong>A. Gkaravelis</strong> <!-- -->, G. Papaioannou</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">The Visual Computer: International Journal of Computer Graphics, Volume 32, Issue 6-8, pp. 
771-780, June 2016</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Lighting design is an essential process in computer cinematography, games, architectural design and various other applications for correctly illuminating or highlighting parts of a scene and enhancing storytelling. When targeting specific illumination goals and constraints, this process can be tedious and counterintuitive, even for experienced users and thus automatic, goal-driven methods have emerged for the estimation of a lighting configuration to match the desired result. We present a general automatic approach to such an inverse lighting design problem, where the number of light sources along with their position and emittance are computed given a set of user-specified lighting goals. To this end, we employ a special hierarchical light clustering that operates in the lighting goal coverage domain and overcomes limitations of previous approaches in environments with high occlusion or structural complexity. Our approach is independent of the underlying light transport model and can quickly converge to usable solutions. 
We validate our results and provide comparative evaluation with the current state of the art.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="https://dl.acm.org/citation.cfm?id=2962791" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="publications/ild2016.bib" target="_blank"><img width="28" height="28" src="images/BibIcon.png" alt="paper_bib"/></a><a class="Publication_pubMediaRef__9Ek0E" href="publications/CGI_2016_Supplementary_material_24.mp4" target="_blank"><img width="28" height="28" src="images/video.svg" alt="paper_video"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/IGP-short.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/igp15.jpg" alt="opening design paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">A Generic Physically-based Approach to the Opening Design Problem</div><div class="Publication_abstract__qipCa">K. Kalampokis, G. Papaioannou, <!-- --> <strong>A. Gkaravelis</strong> </div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Proc. Eurographics 2016 (short paper), 2016</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Today architectural design harnesses photorealistic rendering to accurately assess energy transport for the design of energy-efficient buildings. In this context, we present an automatic physically-based solution to the opening design problem, i.e. the goal-driven process of defining openings on the input geometry given a set of lighting constraints, to better exploit natural daylight. 
Based on a hierarchical approach that combines a linear optimization strategy and a genetic algorithm, our method computes the optimal number, position, size and shape of openings, using a path tracing-based estimator to precisely model the light transport for arbitrary materials and geometry. The method quickly converges to an opening configuration that optimally approximates the desired illumination, with no special geometry editing requirements and the ability to trade quality for performance for interactive applications. We validate our results against ground truth experiments for various scenes and time-of-day intervals.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/IGP-short.pdf" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/papers/LightDesign.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/ilp1.jpg" alt="inverse light design paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Inverse Light Design for High-Occlusion Environments</div><div class="Publication_abstract__qipCa"> <strong>A. Gkaravelis</strong> <!-- -->, G. Papaioannou, K. Kalampokis</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">GRAPP 2015 Proceedings of the 10th International Conference on Computer Graphics Theory and Applications, pp. 26-34, Berlin, Germany</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->Lighting design is a demanding but very important task in computer cinematography, games and architectural design. 
Computer-assisted lighting design aims to provide designers with tools to describe the desired outcome and derive a suitable lighting configuration to match their goal. In this paper, we present an automatic approach to the inverse light source emittance and positioning problem, based on a layered linear/non-linear optimization strategy and the introduction of a special light source indexing according to the compatibility of each individual luminaire position with the desired illumination. Our approach is independent of a particular light transport model and quickly converges to an appropriate and plausible light configuration that approximates the desired illumination, even in environments with high occlusion.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/papers/LightDesign.pdf" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="publications/ild1.bib" target="_blank"><img width="28" height="28" src="images/BibIcon.png" alt="paper_bib"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://jcgt.org/published/0003/04/06/"><img class="Publication_thumbnail__lL9FI" src="publications/crc.jpg" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Real-time Radiance Caching using Chrominance Compression</div><div class="Publication_abstract__qipCa">K. Vardis, G. Papaioannou, <!-- --> <strong>A. Gkaravelis</strong> </div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem">Journal of Computer Graphics Techniques (JCGT), 3(4), pp. 
111-131, 2014</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->This paper introduces the idea of expressing the radiance field in luminance/chrominance values and encoding the directional chrominance in lower detail. Reducing the spherical harmonics coefficients for the chrominance components allows the storage of luminance in higher-order spherical harmonics within the same memory budget, resulting in a finer representation of intensity transitions. We combine the radiance field chrominance compression with an optimized cache population scheme by generating cache points only at locations that are guaranteed to contribute to the reconstructed surface irradiance. These computation and storage savings allow the use of a higher-order spherical harmonics representation to sufficiently capture and reconstruct the directionality of diffuse irradiance, while maintaining fast and customizable performance. We exploit this radiance representation in a low-cost real-time radiance caching scheme, with support for arbitrary light bounces and view-independent indirect occlusion, and showcase the improvements in highly complex and dynamic environments. 
Furthermore, our general qualitative evaluation indicates benefits for offline rendering applications as well.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://jcgt.org/published/0003/04/06/" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://jcgt.org/published/0003/04/06/bibtex.bib" target="_blank"><img width="28" height="28" src="images/BibIcon.png" alt="paper_bib"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://jcgt.org/published/0003/04/06/video.mp4" target="_blank"><img width="28" height="28" src="images/video.svg" alt="paper_video"/></a><a class="Publication_pubMediaRef__9Ek0E" href="https://jcgt.org/published/0003/04/06/demo.zip" target="_blank"><img width="28" height="28" src="images/github-mark.png" alt="paper_code"/></a></div></div></article><br/><h2> PhD Thesis </h2><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="https://drive.google.com/open?id=1k2TjDT6-lHh8jlYZGc9yMVnTavVg7-iR"><img class="Publication_thumbnail__lL9FI" src="publications/teaserPhd.jpg" alt="parallel eg paper"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">Efficient Algorithms for Inverse Lighting Design</div><div class="Publication_abstract__qipCa"> <strong>A. Gkaravelis</strong> <!-- -->, G. 
Papaioannou</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"></div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->A dissertation submitted to the Department of Informatics of the Athens University of Economics and Business.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="https://drive.google.com/open?id=1k2TjDT6-lHh8jlYZGc9yMVnTavVg7-iR" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a></div></div></article><br/><h2> Technical Reports </h2><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="https://lotus.aueb.gr/content/Lotus_STAR.pdf"><img class="Publication_thumbnail__lL9FI" src="https://lotus.aueb.gr/news/goalvsindirect3.png" alt="STAR on Opening and Urban Lighting Design"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">State of the Art Report on Opening and Urban Lighting Design</div><div class="Publication_abstract__qipCa">N. Vitsas, I. Evangelou, G. Papaioannou, E. Kovanidou, <!-- --> <strong>A. 
Gkaravelis</strong> </div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"></div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->This report presents a thorough investigation of the complex and active research areas of both opening and urban lighting design.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="https://lotus.aueb.gr/content/Lotus_STAR.pdf" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a></div></div></article><br/><article class="Publication_projectStyle__2jWzH"><div class="Publication_perspective__HvdNv"><div class="Publication_card_perspective__sUX45"><a href="http://graphics.cs.aueb.gr/graphics/docs/GLIDE-D1.1.pdf"><img class="Publication_thumbnail__lL9FI" src="publications/T1_1.jpg" alt="STAR on Interactive Global Illumination Techniques and Inverse Lighting Problems"/></a></div></div><div class="Publication_projectDescription__TR8Nq"><div class="Publication_projectTitle__yTLBh">State of the Art Report on Interactive Global Illumination Techniques and Inverse Lighting Problems</div><div class="Publication_abstract__qipCa">A. A. Vasilakis, K. Vardis, <!-- --> <strong>A. Gkaravelis</strong> <!-- -->, G. Papaioannou, K. 
Kalampokis</div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"></div><div class="Publication_abstract__qipCa" style="margin-bottom:0.75rem"><strong>Abstract.</strong> <!-- -->This report presents a thorough investigation of the complex and active research areas of both interactive global illumination and inverse lighting problems, with a focus on interactive applications and dynamic environments.</div><div class="Publication_pubMediaContainer__nkElB"><a class="Publication_pubMediaRef__9Ek0E" href="http://graphics.cs.aueb.gr/graphics/docs/GLIDE-D1.1.pdf" target="_blank"><img width="28" height="28" src="images/PDF_24.png" alt="paper_pdf"/></a></div></div></article><br/></main><footer class="Footer_footer___O5Nl"><div><a class="Footer_github_btn__Na_XQ" href="https://github.com/agkar"><img src="images/github-mark.png" width="28" height="28" alt="github link"/></a><a class="Footer_github_btn__Na_XQ" href="https://twitter.com/anastasios_Gk"><img src="images/Twitter_Social_Icon_Circle_Color.png" width="28" height="28" alt="twitter link"/></a><a class="Footer_github_btn__Na_XQ" href="https://www.linkedin.com/in/agkaravelis/"><img src="images/LI-In-Bug.png" width="28" height="28" alt="linkedin link"/></a><a class="Footer_github_btn__Na_XQ" href="mailto:a.gkaravelis@gmail.com" target="_top"><svg xmlns="http://www.w3.org/2000/svg" width="28" height="28" viewBox="-2 -4 24 28" fill="none" stroke="rgb(20,20,20)" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><path d="M0 0 L20 0 L20 20 L0 20 L0 0 L10 8 L20 0"></path></svg></a></div><small style="text-align:right">Anastasios Gkaravelis © 2020-<!-- -->2023</small></footer></div></div><script id="__NEXT_DATA__" type="application/json">{"props":{"pageProps":{}},"page":"/publications","query":{},"buildId":"mv3oqj_dj26xhTJXP5WBB","nextExport":true,"autoExport":true,"isFallback":false,"scriptLoader":[]}</script></body></html>