Technologies: Java, Spring, Ruby, AWS
I am currently working on the Vendor Self-Services team at Amazon, where we implement features that allow vendors to offer their products on the retail website in an automated, self-service manner.
Technologies: C++, OpenGL, GLSL, Qt
A new product developed by Medicim Nobel Biocare for the partner company KLS Martin. This desktop application allows maxillofacial surgeons to plan their surgeries in 3D. Some of its more important features are:
Generation of a high-quality 3D model based on patient DICOM data
Tools for creating clinical cuts in the patient 3D model
Movement tools that allow precise and accurate movement of the cut parts
A tool for generating a 3D template design that can be 3D printed for use in surgeries
This screenshot shows the 3D workspace of IPS CaseDesigner. It shows the parts of the mandible and maxilla created by clinical cuts, which the surgeon can move in order to determine their ideal position. Afterwards, the surgeon can generate a 3D model of a template that can be 3D printed for use during surgery.
During my time on this project, I acted as a software team lead with the following responsibilities:
Ensuring that features are developed in a timely fashion
Scoping the features to be implemented by the team
Designing, implementing and testing complex software features
Technologies: C++, OpenGL, GLSL, Qt
NobelClinician is currently one of Medicim Nobel Biocare's products. It provides clinicians with various tools to carefully plan a dental implant surgery. For example, the software features high-quality volume rendering of X-ray data for diagnostic purposes. Based on this data, the clinician can create a virtual setup of implants, which is then converted into a 3D template shape with millimeter precision; the template can in turn be ordered for production in Nobel Biocare's laboratories.
This screenshot shows one of the diagnostic workspaces of NobelClinician. At the top you can see the panoramic reslice viewer; the cross-sectional reslice viewer is shown in the bottom left corner and the 3D viewer in the bottom right corner. Each of these viewers provides visualizations of the nerves, the implants and the generated 3D template.
I have been part of the development for two years and in that time I have had several responsibilities including the following:
Implementation of various algorithms, e.g. calculation of a 3D distance map, registration of a 3D surface onto a (volumetric) image and volume rendering of X-ray data.
Design of various components in the software.
Enabling the use of unit tests in the software.
Acting as team lead on a two-month project, which included extra tasks (in addition to my normal responsibilities as a developer) such as making sure all features were finished on time, dividing the work among the developers in the team and planning tasks for the next iteration.
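To illustrate the first algorithm in the list above: a distance map stores, for every voxel, the distance to the nearest "object" voxel. A minimal sketch, 2D and breadth-first for brevity (the production algorithm was 3D and optimized differently; all names here are illustrative):

```java
import java.util.ArrayDeque;

public class DistanceMap {
    /** 4-neighbour BFS from every object cell, giving each cell its
     *  (Manhattan-style) grid distance to the nearest object cell. */
    static int[][] compute(boolean[][] object) {
        int h = object.length, w = object[0].length;
        int[][] dist = new int[h][w];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                dist[y][x] = object[y][x] ? 0 : Integer.MAX_VALUE;
                if (object[y][x]) queue.add(new int[]{y, x});
            }
        int[][] nbrs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] c = queue.poll();
            for (int[] n : nbrs) {
                int ny = c[0] + n[0], nx = c[1] + n[1];
                if (ny >= 0 && ny < h && nx >= 0 && nx < w
                        && dist[ny][nx] > dist[c[0]][c[1]] + 1) {
                    dist[ny][nx] = dist[c[0]][c[1]] + 1;
                    queue.add(new int[]{ny, nx});
                }
            }
        }
        return dist;
    }
}
```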
A promotional video showing the workflow of OsseoCare Pro: first the clinician plans the surgery using NobelClinician, then makes it available to OsseoCare Pro via the online cloud platform NobelConnect. Once the surgery starts, the clinician can download the planning onto the iPad and start drilling.
This was a very cool project that was launched to facilitate dental implant surgery by enabling the surgeon to connect an iPad to the actual drilling machine. To accomplish this, we collaborated with another company that built the drilling machine (with iPad interface) and provided us with the necessary tools to control the drill from inside the iOS app that we were developing. The goal of the app was to make it possible for the surgeon to plan the surgery in NobelClinician (on the desktop) and then download that planning onto the iPad. The app would then nicely display all the steps required to execute the planning, while also configuring the drilling machine accordingly (actual control over the drill was still left up to the surgeon for safety purposes).
I was involved in developing various parts of the application, which included implementing the UI, integrating the drilling machine library into the application (in such a way that it could easily be upgraded when new versions became available) and implementing a "report" feature that enabled the clinician to export important surgery data to a PDF file.
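Keeping a vendor library easy to upgrade typically comes down to hiding it behind an application-owned interface, so a new library version only requires a new adapter. A hypothetical sketch (all names invented for illustration; this is not the actual drill API):

```java
public class DrillIntegration {
    /** The app talks only to this interface, never to the vendor API directly. */
    interface DrillController {
        void setSpeed(int rpm);
        int currentSpeed();
    }

    /** Adapter for a hypothetical vendor API version 1; a version-2 adapter
     *  can later be swapped in without touching any application code. */
    static class VendorV1Adapter implements DrillController {
        private int rpm; // stands in for calls into the vendor SDK
        public void setSpeed(int rpm) { this.rpm = rpm; }
        public int currentSpeed() { return rpm; }
    }
}
```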
A promotional video showing how the NobelClinician app helps clinicians communicate to a patient how the dental implant surgery will proceed.
Two months after I started working for Medicim Nobel Biocare, I was assigned to a team that was tasked with creating a mobile version of the NobelClinician software. The goal of the app was to make it easier for clinicians to communicate medical information and surgery planning data to patients. For instance, the app allows the clinician to show the patient how the implant surgery will be executed, using a (pseudo) 3D viewer and X-ray images. For legal purposes, a signature feature was also added, allowing the patient to sign off on the procedure.
The pseudo 3D view of the app. The user can drag up/down or left/right on the view and the model will turn in the corresponding direction. Alternatively, the user can also tap one of the listed items in the left column, which will cause the 3D view to center on the selected object.
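This kind of interaction can be sketched as selecting one frame out of a ring of pre-rendered views based on the drag distance. The constants below are illustrative, not the app's actual values:

```java
public class PseudoRotation {
    static final int FRAME_COUNT = 36;          // one frame per 10 degrees (assumed)
    static final double PIXELS_PER_FRAME = 12;  // drag sensitivity (assumed)

    /** Returns the frame to display after dragging dx pixels from startFrame. */
    static int frameForDrag(int startFrame, double dx) {
        int delta = (int) Math.round(dx / PIXELS_PER_FRAME);
        // floorMod keeps the index in [0, FRAME_COUNT) for negative drags too.
        return Math.floorMod(startFrame + delta, FRAME_COUNT);
    }
}
```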
My responsibilities for this project covered all aspects of the application. Amongst other things, I implemented the rendering code for the signature and annotation features (which converted the "raw" drawing into a fluent curved spline), I built the UI from designs made by the functional team and I worked on the pseudo-3D rendering of the model (volume rendering was not possible at that time, so we opted for a screenshot-based approach that gave the illusion of 3D rotation).
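The spline conversion can be illustrated with Catmull-Rom interpolation, a common way to turn sampled touch points into a smooth curve that still passes through them; this sketch is illustrative and not necessarily the exact formulation the app used:

```java
public class SignatureSmoothing {
    /** Catmull-Rom interpolation between p1 and p2 (p0 and p3 are their
     *  neighbours), with t in [0,1]. Applied per coordinate; a stroke is
     *  smoothed by emitting several interpolated samples between each pair
     *  of raw touch points. At t=0 this yields p1, at t=1 it yields p2. */
    static double catmullRom(double p0, double p1, double p2, double p3, double t) {
        double t2 = t * t, t3 = t2 * t;
        return 0.5 * ((2 * p1)
                + (-p0 + p2) * t
                + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                + (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
    }
}
```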
Technologies: Java, Android SDK
In 2011 our team was temporarily assigned a high-priority project that consisted of making a prototype that could prove whether it was possible to port the GIS rendering API of Luciad's flagship product (LuciadMap) to mobile devices running Android. Because development for Android is also done in Java, it turned out to be quite easy to get up and running quickly: most of the existing API could be reused and only a limited number of rendering classes needed to be rewritten to work with the Android SDK.
In this project I was responsible for creating the demo application itself, which included setting up the client-side code for some of the demo's custom networking features and making sure the app was very stable (prospective customers would get a chance to play around with the app on their personal devices).
Despite the short time frame of one month, we succeeded in delivering a very stable demo application that impressed management and customers so much that a new product was launched under the name LuciadMobile. This is one of the projects I am very proud to have worked on: despite the lack of time, our team really pulled together to create a fully featured, stable demo app developed on a platform none of us had ever worked with before.
Technologies: Java, OpenGL, GLSL, OpenCL
My first professional software project: Luciad wanted to create a new hardware-accelerated version of what was at that time their flagship product: LuciadMap. The main goal was to leverage the power of the GPU so that the new product would exhibit a vast performance improvement over LuciadMap's software rendering engine while still supporting the same features. An additional benefit of using hardware rendering was the opportunity to implement features that had not been feasible up until that point in time (e.g. due to the algorithm being too demanding for the CPU in combination with real-time rendering).
Screenshot of the Line-Of-Sight (LOS) feature of LuciadLightspeed. Initially implemented as a CPU algorithm, it took about a minute to compute a detailed LOS radius. By leveraging the parallel computing power of the GPU (via OpenCL), we made it work in real time (roughly 10 ms to compute the same LOS radius).
In the first year of the LuciadLightspeed project, our team was tasked with building a prototype that could serve as a proof of concept. During this time, I worked on several parts of the prototype, such as the implementation of an octree data structure, porting the existing line-of-sight algorithm to the GPU using OpenCL and building from scratch a demo application that could be used to impress customers (and management) with the various improvements that resulted from the prototype phase.
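The core of such a line-of-sight computation can be sketched as a visibility test along one terrain profile; the speed-up from the OpenCL port comes from evaluating thousands of such rays in parallel, one per work item. A simplified serial version (illustrative, not the product's actual code):

```java
public class LineOfSight {
    /** True if the cell at index `target` along a terrain height profile is
     *  visible from an observer at index 0, standing eyeHeight above the
     *  terrain. A cell is visible when no intermediate cell subtends a
     *  steeper slope from the eye than the target does. */
    static boolean visible(double[] profile, double eyeHeight, int target) {
        double eye = profile[0] + eyeHeight;
        double maxSlope = Double.NEGATIVE_INFINITY;
        for (int i = 1; i <= target; i++) {
            double slope = (profile[i] - eye) / i;
            if (i == target) return slope >= maxSlope;
            if (slope > maxSlope) maxSlope = slope;
        }
        return true; // target == 0: the observer's own cell
    }
}
```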
Screenshot of the projective texturing feature of LuciadLightspeed being used in the demo application: the bottom-left corner shows a video feed, while the 3D view shows a model of a UAV flying over the same video feed projected onto the terrain's 3D geometry.
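Conceptually, projective texturing transforms each terrain vertex by the projector's view-projection matrix and uses the perspective-divided result as texture coordinates into the video frame. A minimal sketch of that coordinate computation (the row-major matrix convention and the 0.5 bias are illustrative assumptions):

```java
public class ProjectiveTexture {
    /** Transforms world-space point p by the projector's 4x4 row-major
     *  matrix m, performs the perspective divide, and applies the usual
     *  0.5 * x + 0.5 bias to map clip space [-1,1] into [0,1] UV space. */
    static double[] projectToUV(double[][] m, double[] p) {
        double[] q = new double[4];
        for (int r = 0; r < 4; r++)
            q[r] = m[r][0] * p[0] + m[r][1] * p[1] + m[r][2] * p[2] + m[r][3];
        return new double[]{0.5 * q[0] / q[3] + 0.5, 0.5 * q[1] / q[3] + 0.5};
    }
}
```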
Once the prototype was deemed a success, the project moved into the production phase. That is, our focus turned towards creating an easy-to-use API that closely resembled the existing API of LuciadMap so that customers could migrate effortlessly towards the new product. My responsibilities in this phase included designing parts of the API, writing documentation, fixing bugs and further polishing the algorithms I had worked on during prototyping to make them production ready.