Current · Badabom · Korea Institute of Marine Science & Technology Promotion (KIMST) · Jan 2026 - Present

DEVOPS / OBSERVABILITY

SSE + Cross-WAS Real-time Log Viewer

The WAS was in the Daejeon IDC, but network segregation policy meant the only PCs that could reach it were at the Busan office. So checking a log basically required making the trip down to Busan. I built a viewer that streams WAS logs in real time over the web, and added a Cross-WAS relay so logs across the two nodes all show up in a single screen.

  • Access constraint · Busan PC only: the Daejeon WAS is reachable only from the Busan office
  • Transport · SSE: cleaner access logs than polling
  • Multi-node · Cross-WAS relay: view any node, independent of the connected WAS
  • Auth · 4 layers: admin / IP / PW / internal token

Spring MVC · Server-Sent Events · RandomAccessFile · HttpURLConnection · Proxy Token · JEUS (2 nodes) · eGovFrame

Problem

Checking a log meant going to Busan

The WAS lives in the Daejeon IDC. Under the network segregation policy, the only PCs allowed to connect to it were at a single Busan office, which made the official access path literally "travel to the Busan office → sit at a designated PC → SSH into the Daejeon WAS from there." If I was working in Seoul and needed to see a log, I had two options: take the trip down to Busan, or ping someone in Busan to grab the logs for an exact window. When an incident hit, decisions had to happen minute by minute, but nothing could move until a messenger thread asking "who's in front of the Busan PC right now?" had gone around first.

On top of that, the WAS was split across two nodes, so tracing the cause of an incident often meant cross-checking both servers. Opening two SSH windows on the Busan PC and flipping between Daejeon WAS 1 and WAS 2 was the default routine, and asking Busan for a remote favor was the other half of the loop.

At first I considered a simple polling-based viewer that mimicked tail, but a polling viewer's own requests would pile up so many access log entries that they would drown the very logs it was meant to show. Reading the whole log file on every request wasn't an option either; the files were simply too large.

  • Only Busan PCs could reach the Daejeon WAS

    Network segregation blocked direct SSH from Seoul or Daejeon. To see a log, someone had to go to Busan, or ask the Busan staffer every single time.

  • Cross-checking multi-WAS nodes

    With the WAS split across two boxes, looking at just one often wasn't enough. Two SSH windows on the Busan PC, flipping between WAS 1 and WAS 2, was the baseline setup.

  • Polling pollutes the access log

    A short-interval polling loop mimicking tail would flood the access log with endless viewer requests — the viewer itself becomes a noise source that buries the real logs.

  • Can't read a huge file end-to-end every request

    Log files running tens of MB to GB meant loading the whole thing per request would blow up both memory and response time. I needed a way to read only the exact tail window.

Approach

Pull it into the admin web — SSE + Cross-WAS relay

The core idea was simple. The only pre-approved path to the server ran through the Badabom admin web management tools, so the log viewer should live inside that too. The admin web already operated within Badabom's internal network, so existing access paths were reusable, and admins could use it from Seoul or Busan as long as they could log in.

Real-time delivery was handled with SSE (Server-Sent Events). Polling would have turned the viewer into a log-noise source, so a single long-lived connection with the server pushing data was the right fit. The multi-node problem was solved with a Cross-WAS SSE relay — if I'm connected to WAS 1 but want to see WAS 2's logs, WAS 1 opens its own SSE connection to WAS 2's stream.do and forwards every event it receives straight back to the client. The user just flips a node switch and doesn't need to care which node they landed on.
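The event framing SSE uses is simple enough to show in a few lines, and it is also what makes the relay trivial: events are plain text that can be forwarded verbatim. Here is a minimal sketch (the class `SseFrameSketch` and helper names `sseFrame` and `heartbeat` are illustrative, not from the actual controller): an SSE event is an optional `event:` line plus a `data:` line, terminated by a blank line, and a `:`-prefixed comment line is a valid no-op that works as a heartbeat.

```java
// Minimal sketch of SSE wire framing. Names here are illustrative only.
public class SseFrameSketch {

    // An SSE event: optional "event:" line, "data:" line, blank line terminator
    static String sseFrame(String event, String data) {
        StringBuilder sb = new StringBuilder();
        if (event != null) {
            sb.append("event: ").append(event).append('\n');
        }
        sb.append("data: ").append(data).append("\n\n"); // blank line ends the event
        return sb.toString();
    }

    // A ":"-prefixed comment line is ignored by EventSource clients,
    // which makes it a safe keep-alive for intermediate proxies
    static String heartbeat() {
        return ": heartbeat\n\n";
    }

    public static void main(String[] args) {
        System.out.print(sseFrame("log", "{\"lines\":[\"INFO started\"]}"));
        // prints:
        // event: log
        // data: {"lines":["INFO started"]}
    }
}
```

Because the framing is line-oriented text, a relay only needs to copy lines through and flush on the blank line that closes each event.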

For large log files I used RandomAccessFile to read from the end in reverse, pulling only the tail N lines without any memory pressure. When log rotation shrinks the file length, the code detects that and resets position to 0 as well.
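The windowed-tail idea can be shown end to end with a small self-contained sketch (the class `TailReadSketch` and method `tailLines` are illustrative, not the production `readLogTail`): seek to the start of a bounded window near the end of the file, read that window once, and keep only the last N lines.

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Self-contained sketch of a windowed tail read: one bounded buffer
// from the end of the file instead of loading the whole file.
public class TailReadSketch {

    static List<String> tailLines(File file, int lineCount, int windowBytes) throws Exception {
        List<String> out = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            long fileLength = raf.length();
            long readStart = Math.max(0, fileLength - windowBytes);
            byte[] buf = new byte[(int) (fileLength - readStart)];
            raf.seek(readStart);
            raf.readFully(buf);
            // Note: when readStart > 0 the first split element may be a clipped partial line
            String[] lines = new String(buf, StandardCharsets.UTF_8).split("\\r?\\n");
            for (int i = lines.length - 1; i >= 0 && out.size() < lineCount; i--) {
                if (!lines[i].isEmpty()) out.add(lines[i]);
            }
        }
        Collections.reverse(out); // back to chronological order
        return out;
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("badabom", ".log");
        f.deleteOnExit();
        StringBuilder sb = new StringBuilder();
        for (int i = 1; i <= 1000; i++) sb.append("line ").append(i).append('\n');
        Files.write(f.toPath(), sb.toString().getBytes(StandardCharsets.UTF_8));
        System.out.println(tailLines(f, 3, 64 * 1024));
        // prints: [line 998, line 999, line 1000]
    }
}
```

Memory use is bounded by the window size rather than the file size, which is what makes this safe for log files in the tens-of-MB-to-GB range.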

LOCAL READ (connected WAS = target)

Admin Browser (EventSource)
  → SSE · single connection →
WAS 1 · /adm/sys/log/stream.do
  • SSE connection opened
  • readLogForward loop
  → File I/O →
log file (WAS 1) · /logs/badabom.log
  • RandomAccessFile
  • Read from the tail

CROSS-WAS RELAY (connected WAS ≠ target)

Admin Browser (EventSource)
  → SSE · single connection →
WAS 1 · /adm/sys/log/stream.do
  • Detect wasNode=2
  • Open remote stream.do
  → SSE relay · internal token →
WAS 2 · /adm/sys/log/stream.do
  • PROXY_TOKEN validation
  • Send SSE stream
  → File I/O →
log file (WAS 2) · /logs/badabom.log
  • RandomAccessFile

  • Why SSE: cleaner access logs than polling; the viewer itself stops being a noise source
  • Cross-WAS: see logs of a WAS you're not even connected to, all on one screen, with no SSH juggling
  • Windowed Tail: read only the last N bytes with RandomAccessFile, safe for huge files

Process

Implementation steps

  1. 01

    SSE stream — 10-minute timeout + heartbeat

    I open the SSE response with the text/event-stream header and check for file changes every second to push new lines. A 10-minute timeout prevents stale zombie connections from hanging around, and a single-line comment heartbeat is emitted on 30 seconds of idle so intermediate proxies don't tear the connection down. If the file size shrinks, I treat it as a rotation and reset position to 0.

    AdmServerLogController.java
    // SSE local file streaming — server pushes over a single connection
    response.setContentType("text/event-stream");
    response.setCharacterEncoding("UTF-8");
    response.setHeader("Cache-Control", "no-cache");
    response.setHeader("Connection", "keep-alive");
    response.setHeader("X-Accel-Buffering", "no"); // Prevent Nginx proxy buffering
    response.flushBuffer();

    PrintWriter writer = response.getWriter();
    long position = lastPosition;
    long startTime = System.currentTimeMillis();
    long maxDuration = 10 * 60 * 1000L; // 10-minute timeout
    int idleCount = 0;

    while (!writer.checkError()) {
        if (System.currentTimeMillis() - startTime > maxDuration) {
            writer.write("event: timeout\ndata: {\"message\":\"Connection closing after 10 minutes.\"}\n\n");
            writer.flush();
            break;
        }

        long fileLength = file.length();
        if (fileLength > position) {
            Map<String, Object> data = readLogForward(file, position, level, keyword);
            List<?> lines = (List<?>) data.get("lines");
            position = ((Number) data.get("position")).longValue();
            if (!lines.isEmpty()) {
                writer.write("event: log\ndata: ");
                writer.write(objectMapper.writeValueAsString(data));
                writer.write("\n\n");
                writer.flush();
            }
            idleCount = 0;
        } else if (fileLength < position) {
            position = 0; // Log rotation detected → restart from the beginning
        } else {
            idleCount++;
            if (idleCount >= 30) { // 30s idle → heartbeat
                writer.write(": heartbeat\n\n");
                writer.flush();
                idleCount = 0;
            }
        }
        Thread.sleep(1000);
    }
  2. 02

    Cross-WAS SSE relay

    When the target node is a different WAS, the current WAS attaches to that node's stream.do as an SSE client and forwards each line straight to its own output stream. The key is setReadTimeout(0) — because SSE is a long-lived connection, the read timeout is unlimited, and auth between internal WAS nodes is handled via a dedicated proxy token header. Since the relay is a long SSE connection rather than polling, traffic and request counts stay minimal too.

    AdmServerLogController.java
    // Cross-WAS SSE relay — long-lived connection to the remote WAS's stream.do, forwarded line by line
    private void relayRemoteStream(PrintWriter writer, String wasNode,
            String fileName, long lastPosition, String level, String keyword) {
        HttpURLConnection conn = null;
        try {
            String nodeUrl = getProperty("server.log." + wasNode + ".url");
            String token = getProperty("server.log.proxy.token");

            String urlStr = nodeUrl + "/adm/sys/log/stream.do"
                    + "?fileName=" + URLEncoder.encode(fileName, "UTF-8")
                    + "&lastPosition=" + lastPosition
                    + "&level=" + URLEncoder.encode(level, "UTF-8")
                    + "&keyword=" + URLEncoder.encode(keyword, "UTF-8");

            conn = (HttpURLConnection) new URL(urlStr).openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty(PROXY_TOKEN_HEADER, token); // Auth between internal WAS nodes
            conn.setRequestProperty("Accept", "text/event-stream");
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(0); // SSE long connection — no read timeout

            // Forward SSE event lines verbatim — flush on blank lines
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null && !writer.checkError()) {
                    writer.write(line + "\n");
                    if (line.isEmpty()) writer.flush();
                }
            }
        } catch (Exception e) {
            writer.write("event: error\ndata: {\"message\":\"Remote streaming error\"}\n\n");
            writer.flush();
        } finally {
            if (conn != null) conn.disconnect();
        }
    }
  3. 03

    Reading large log tails (RandomAccessFile)

    Instead of loading the whole file into memory, I use RandomAccessFile to read the last N bytes and pull lines in reverse order. When level or keyword filters are applied, the window auto-expands to a larger buffer (512KB) until enough post-filter lines reach the requested count. Continuation is wired up with an offset returned to the client, which the frontend uses to drive infinite scroll.

    AdmServerLogController.java
    // Read the file tail in reverse — pull only the last N lines without loading the whole thing
    private Map<String, Object> readLogTail(File file, long fromOffset,
            int lineCount, String level, String keyword) throws Exception {
        List<String> resultLines = new ArrayList<>();
        long readStart;
        long fileSize;

        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            fileSize = raf.length();
            long readEnd = (fromOffset > 0 && fromOffset < fileSize) ? fromOffset : fileSize;

            // When filtering, read a larger window (until lineCount lines remain after filter)
            boolean filterLevel = !"ALL".equalsIgnoreCase(level);
            boolean filterKeyword = StringUtils.isNotBlank(keyword);
            boolean filtering = filterLevel || filterKeyword;
            String levelUpper = filterLevel ? level.toUpperCase() : "";
            String keywordLower = filterKeyword ? keyword.toLowerCase() : "";
            int bufferSize = (int) Math.min(readEnd, filtering ? 1024 * 512 : 1024 * 64);
            readStart = Math.max(0, readEnd - bufferSize);

            byte[] buffer = new byte[(int) (readEnd - readStart)];
            raf.seek(readStart);
            raf.readFully(buffer);

            String[] allLines = new String(buffer, UTF_8).split("\\r?\\n");
            for (int i = allLines.length - 1; i >= 0 && resultLines.size() < lineCount; i--) {
                String line = allLines[i];
                if (StringUtils.isBlank(line)) continue;
                if (filterLevel && !line.toUpperCase().contains(levelUpper)) continue;
                if (filterKeyword && !line.toLowerCase().contains(keywordLower)) continue;
                resultLines.add(line);
            }
        }

        Map<String, Object> result = new HashMap<>();
        result.put("lines", resultLines);
        result.put("offset", readStart); // next fromOffset for the hasMore calc (infinite scroll)
        result.put("fileSize", fileSize);
        return result;
    }
  4. 04

    Layered auth — internal proxy / admin / IP / password

Logs can contain sensitive data, so access control is stacked in four layers. Internal WAS-to-WAS traffic passes with just the proxy token, while external users must clear all three of admin role, IP whitelist, and secondary password. Failing any check returns 403 or 401 immediately.

    AdmServerLogController.java
    // Layered auth — internal proxy / admin role / IP / password
    private ModelAndView checkSecurity(HttpServletRequest request) {
        // 1) Internal WAS-to-WAS traffic passes with just the proxy token
        String proxyToken = request.getHeader(PROXY_TOKEN_HEADER);
        if (StringUtils.isNotBlank(proxyToken)) {
            String configToken = getProperty("server.log.proxy.token");
            if (proxyToken.equals(configToken)) return null;
            return jsonForm.modelAndViewJson("403", "Invalid proxy token", null);
        }

        // 2) External users must be logged in as admin
        if (!isAdminLoggedIn()) {
            return jsonForm.modelAndViewJson("403", "Admin login required", null);
        }
        // 3) IP whitelist (optional)
        if (!isAllowedIp(request)) {
            return jsonForm.modelAndViewJson("403", "IP not allowed", null);
        }
        // 4) Secondary password auth (separate modal on viewer entry)
        if (!isAuthenticated(request)) {
            return jsonForm.modelAndViewJson("401", "Authentication required", null);
        }
        return null;
    }
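The routing decision that picks between the local read (step 01) and the relay (step 02) reduces to one comparison of the requested node against the connected node. A minimal sketch under stated assumptions (the class `NodeRouteSketch`, enum `Mode`, and method `chooseMode` are illustrative; in the real controller the connected node comes from configuration):

```java
// Sketch of the node-routing decision from the architecture diagram:
// serve the log file locally when the requested node matches the
// connected WAS, otherwise relay to the remote node's stream.do.
public class NodeRouteSketch {

    enum Mode { LOCAL_READ, CROSS_WAS_RELAY }

    static Mode chooseMode(String currentNode, String requestedNode) {
        // Missing or matching wasNode parameter → read the local log file
        if (requestedNode == null || requestedNode.isEmpty()
                || requestedNode.equals(currentNode)) {
            return Mode.LOCAL_READ;
        }
        // Different node → open an SSE connection to its stream.do and forward
        return Mode.CROSS_WAS_RELAY;
    }
}
```

Keeping this decision on the server is what lets the user flip a node switch without caring which WAS their session landed on.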

Outcome

Results and lessons

  • The "trip to Busan" constraint is gone

Incident triage no longer starts with "is anyone at the Busan PC right now?" Even when I'm working from Seoul, a single admin login gets me streaming logs in real time, and incident response is visibly faster.

  • Cross-checking both nodes on one screen

    Thanks to the Cross-WAS relay, the target WAS can be picked independently of which node the user connected to, so "which node am I on again?" simply stops being a question. If this scales out to 3+ nodes later, the same pattern plugs right in.

  • SSE was the right call from an ops perspective too

If I had gone with polling, the log viewer would have endlessly piled up access log entries and buried the real logs, the very paradox I was trying to avoid. This project drove home the value of SSE not just as a feature choice but as a way to keep the operator's logs clean.

  • Currently tail-only; roadmap ahead

    When central log aggregation (ELK/Loki and friends) lands, parts of this viewer's role could be absorbed by that. Even so, meaningfully cutting the "Busan dependency" until then was worth it on its own. And even after central aggregation arrives, this stays useful as the direct-to-WAS path operators can still reach for when they want to see raw logs at the source.

MORE

Explore other cases

Badabom

AUTH / SSO

Building an SSO Provider for Partner Sites

Implemented an SSO Provider so external partner sites (e.g., OTT) could sign in with Badabom accounts. Single-use UUID tokens stored in the database support multiple WAS nodes, and CI (Connecting Information) auto-maps accounts across both sides.


Badabom

LEGACY MIGRATION

Migrating the OTT Technology-Trade System into Badabom

Moved an Oracle + MyBatis technology-trade platform (OTT) onto PostgreSQL + iBATIS. Rewrote 87 URLs, 34 JSPs, 80+ SQL queries, and 14 tables.


GAIS — Government Advertising Integrated Support System

CI/CD

Automating the Build and Deploy Pipeline

Replaced a fully manual build-and-deploy workflow with a Jenkins + GitLab Webhook pipeline, cutting deploy time from 15–20 min down to around 4 min.


GAIS — Government Advertising Integrated Support System

INFRA / SESSION

Redis-Backed Session Clustering

JEUS Standard doesn't support native session clustering, so I put Redis in front as an external session store. That unlocked rolling restarts across WAS nodes.


GAIS — Government Advertising Integrated Support System

SECURITY / NETWORK

Applying TLS 1.3 via an Nginx Reverse Proxy

Touching the shared WebtoB SSL felt risky, so I put Nginx in front and terminated TLS there instead. Existing services kept running untouched while TLS 1.3 was rolled out.


Freelance · Side Projects

CLIENT WORK / WEB

Pitched and Built a Postpartum Care Center Site Renewal

My wife had stayed at a postpartum care center whose website felt dated, so I mocked up a UI sample and pitched it myself. I built an Astro static site with a 192-frame scroll animation, Kakao Map, and SEO — then shipped it to their production domain.


Freelance · Side Projects

SIDE PROJECT / AI

Family-Driven Baby Naming with AI + Tournament-Style Voting

Existing naming services are designed for solo use, so I built a way for the whole family to join in. GPT-4o suggests names aligned with Saju (birth-chart) and Ohaeng (Five-Element) rules, and the family votes tournament-style to pick the final name.
