
Case studies · Jun 2025 – Nov 2025 · Migration framework for legacy services

Tuxedo to gRPC: modernizing legacy RPC safely

A Lex/Yacc transpiler, protocol shim, and runtime path that wrap legacy Tuxedo-style services with typed gRPC interfaces while preserving existing business logic.

Role: Designer & implementation engineer
Stack: gRPC · Protobuf · Lex/Yacc · HAProxy · Consul · Nomad

Zero business-code changes · POC validated on a production service · Typed gRPC boundary · Incremental rollout model

The problem

A large telecom backend was running its core services on Oracle Tuxedo — a battle-tested transaction monitor that has carried billing, provisioning, and account-management workloads for some operators for over two decades. Tuxedo is genuinely good at the job it was designed for, but the cost surface had drifted out from under it.

Three problems compound:

  • C-only ecosystem lock-in. Services written for Tuxedo are typically C, and service-to-service communication uses Tuxedo’s flattened-buffer types: fixed-width, position-encoded blobs that are essentially untyped on the wire. Writing a new service in Go or Rust, or wiring one into Kafka, means building a translation layer for every interaction.
  • A platform dependency that shapes every decision. Tuxedo is not just a library in this kind of system; it influences runtime topology, language choices, operational practices, and migration economics. Any replacement path has to reduce that coupling gradually instead of pretending the old surface can disappear in one release.
  • No clean path to anything modern. Streaming events into Kafka, exposing services to a service mesh, deploying alongside Kubernetes-shaped workloads — every one of these wants to talk to a typed RPC, not a flattened-buffer transaction.

The question wasn’t really “how do we use less Tuxedo.” It was: how do you get off Tuxedo without rewriting decades of business logic the carrier cannot afford to invalidate?

What I actually built

The honest framing matters here, because it’s both more accurate and more impressive than the alternative.

I designed and shipped a migration framework — a generic protocol shim, a Lex/Yacc transpiler, an in-house orchestrator, and a runtime that any Tuxedo-shaped service can drop into. We validated it end-to-end on one production service as a proof of concept, then shaped it for broader rollout across carrier deployments with different operational constraints.

That framing — framework + POC + rollout path — is the real shape of the work. Calling it “migrated a backend” would have been an overclaim. Calling it a framework for incremental modernization is the real story: reusable platform work instead of a one-off rewrite.

The transpiler

The headline technical claim — “zero changes to business code” — is delivered by the piece of the framework I’m proudest of: a Lex/Yacc transpiler that takes a legacy contract definition and emits the entire gRPC scaffolding around it.

Every Tuxedo service is described by a contract file: a small, declarative definition of the service’s name, the request fields, the reply fields, their types, and how they’re laid out in the flat buffer. These contract files exist because the C compiler needs them to generate stub code. They are, fortunately, exactly the right input to feed a parser.
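
For orientation, here is what such a contract might look like in the notation the grammar below accepts. The SERVICE keyword, field names, and widths are invented for this writeup; the request/reply shape mirrors the GetMemo example used throughout.

SERVICE CustomerService

METHOD GetMemo (
    char[16]   customer_id;
    uint32_t   memo_kind;
    char*      locale NULLABLE;
) RETURNS (
    char[256]  memo_text;
    int32_t    memo_count;
)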

The transpiler does three things in one pass:

  1. Parses the contract file using a Yacc grammar (Lex tokenizer in front).
  2. Walks the parse tree, applying a datatype-mapping table to translate every C type into its Protobuf equivalent.
  3. Emits, from a single contract, three artifacts:
    • a fully-formed .proto file with the gRPC service and message definitions,
    • a C++ service-implementation header (.hpp) declaring each RPC method,
    • a C++ service-implementation source (.cpp) wrapping each legacy C function with the gRPC handler boilerplate.

Concretely, an abridged fragment of the grammar:

contract
    : service_decl method_list
    ;

method_decl
    : METHOD IDENTIFIER '(' field_list ')' RETURNS '(' field_list ')'
        { emit_proto_rpc($2, $4, $8);
          emit_cpp_method_decl($2, $4, $8);
          emit_cpp_method_impl($2, $4, $8); }
    ;

field_decl
    : type_name IDENTIFIER ';'              { collect_field($1, $2, NULLABLE_DEFAULT); }
    | type_name IDENTIFIER NULLABLE ';'     { collect_field($1, $2, NULLABLE_TRUE); }
    ;

A snippet of an emitted .proto:

syntax = "proto3";
package customer;
service CustomerService {
rpc GetMemo (GetMemoRequest) returns (GetMemoResponse);
rpc UpdateBalance (UpdateBalanceRequest) returns (UpdateBalanceResponse);
}
message GetMemoRequest {
string customer_id = 1;
uint32 memo_kind = 2;
google.protobuf.StringValue locale = 3; // nullable
}

And the corresponding C++ service-implementation source the transpiler also emits, automatically wrapping the legacy C function:

grpc::Status CustomerServiceImpl::GetMemo(
    grpc::ServerContext* ctx,
    const customer::GetMemoRequest* req,
    customer::GetMemoResponse* resp) {
  // 1. unmarshal Protobuf -> legacy flat buffer (codec table generated above)
  CustomerMemoBuffer in_buf{};
  pb_to_buf(*req, in_buf);

  // 2. delegate to the unchanged legacy C function
  CustomerMemoBuffer out_buf{};
  int rc = legacy_get_customer_memo(&in_buf, &out_buf);
  if (rc != 0) {
    return ::grpc::Status(::grpc::StatusCode::INTERNAL,
                          legacy_strerror(rc));
  }

  // 3. marshal flat buffer -> Protobuf reply
  buf_to_pb(out_buf, *resp);
  return ::grpc::Status::OK;
}
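
The pb_to_buf / buf_to_pb helpers in that wrapper are themselves emitted from the contract, as plain field-by-field copies. A minimal sketch of what the request-side helper could look like; the CustomerMemoBuffer layout and its locale presence flag are assumptions for illustration, not the real buffer definition.

#include <cstring>
#include "customer.pb.h"            // generated from the emitted .proto

// Assumed flat-buffer layout, for illustration only.
struct CustomerMemoBuffer {
  char     customer_id[16];         // fixed width, '\0'-padded
  uint32_t memo_kind;
  char     locale[8];               // meaningful only when locale_set != 0
  int      locale_set;              // legacy presence flag
};

// Protobuf -> legacy flat buffer (one emitted function per message).
void pb_to_buf(const customer::GetMemoRequest& req, CustomerMemoBuffer& buf) {
  std::memset(&buf, 0, sizeof(buf));                        // '\0'-pad everything
  std::strncpy(buf.customer_id, req.customer_id().c_str(),
               sizeof(buf.customer_id) - 1);                // empty string <-> \0-pad
  buf.memo_kind = req.memo_kind();

  // Nullable field: the wrapper's presence bit drives the legacy flag.
  if (req.has_locale()) {
    std::strncpy(buf.locale, req.locale().value().c_str(), sizeof(buf.locale) - 1);
    buf.locale_set = 1;
  }
}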

The point here is leverage. One contract file in, three correct, consistent artifacts out. Every service migrated through the framework has the same shape, the same error semantics, the same null handling — because a deterministic transpiler emitted it. There’s no “we did this one a little differently” drift.

Same idea as the WebFOCUS auto-converter, applied to the protocol surface instead of the report surface. Parser-driven tools beat handwritten ones every time when the input language is yours to control.

Datatype mapping & null treatment

The unglamorous half of the transpiler is the datatype mapping table. C and Protobuf disagree on enough type semantics to make naive mapping unsafe.

The framework’s mapping table is short and explicit:

C type                     | Protobuf type                 | Null treatment
---------------------------|-------------------------------|----------------------------------
char[N] (fixed string)     | string                        | empty string ↔ \0-pad
char* (variable string)    | string                        | wrap in StringValue if nullable
int32_t / int              | sint32 (signed) / int32       | Int32Value if nullable
int64_t / long long        | sint64 / int64                | Int64Value if nullable
double                     | double                        | DoubleValue if nullable
Fixed-width BCD decimals   | string (canonical decimal)    | empty string ↔ null
Boolean flag ('Y'/'N')     | bool                          | BoolValue if nullable
Sentinel-typed enum        | enum (with UNSPECIFIED = 0)   | 0 is null

Two design decisions ride on this table.

Null is a first-class value, not a sentinel. Legacy C code uses every flavor of “this field is missing”: empty string, magic value -1, 'N', an explicit null flag in a side struct. The transpiler emits google.protobuf wrapper types (StringValue, Int32Value, etc.) for fields the contract marks as nullable, so the receiver always knows the difference between “zero” and “not set.” Every one of the legacy sentinels gets translated explicitly, in both directions.
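
A minimal sketch of one such rule in isolation, assuming a hypothetical nullable integer whose legacy sentinel is -1; in the generated code this logic lives inside the per-message codec functions rather than standalone helpers.

#include <cstdint>
#include <memory>
#include <google/protobuf/wrappers.pb.h>

// Legacy -> Protobuf: the -1 sentinel becomes "no wrapper at all", so a
// receiver can tell "not set" apart from a legitimate 0 (or even a real -1).
std::unique_ptr<google::protobuf::Int32Value> limit_from_legacy(int32_t raw) {
  if (raw == -1) return nullptr;                 // sentinel -> absent
  auto v = std::make_unique<google::protobuf::Int32Value>();
  v->set_value(raw);
  return v;
}

// Protobuf -> legacy: an absent wrapper turns back into the sentinel,
// so unchanged C code keeps seeing the convention it has always expected.
int32_t limit_to_legacy(const google::protobuf::Int32Value* v) {
  return v ? v->value() : -1;
}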

Decimals stay decimals. Telecom billing is full of money fields that must not lose precision. We never let those touch double. Fixed-width BCD goes to canonical-decimal string; the consumer parses it with a decimal library on its own side.
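
A sketch of the decimal path, assuming a packed-BCD layout with two digits per byte and a trailing sign nibble, plus a fixed scale; the real buffers' encoding may differ and leading-zero trimming is omitted, but the invariant is the point: the value travels as a decimal string and never passes through a double.

#include <cstddef>
#include <cstdint>
#include <string>

// Packed BCD -> decimal string.
// Assumed layout (illustrative): two digits per byte, final low nibble is the
// sign (0xD = negative), `scale` digits after the decimal point (scale < digit count).
std::string bcd_to_decimal_string(const uint8_t* bcd, std::size_t len, int scale) {
  std::string digits;
  for (std::size_t i = 0; i < len; ++i) {
    digits.push_back(static_cast<char>('0' + (bcd[i] >> 4)));     // high nibble: always a digit
    if (i + 1 < len)
      digits.push_back(static_cast<char>('0' + (bcd[i] & 0x0F))); // low nibble, except the sign
  }
  const bool negative = (bcd[len - 1] & 0x0F) == 0x0D;

  if (scale > 0)                                // e.g. "12345" with scale 2 -> "123.45"
    digits.insert(digits.size() - scale, ".");

  return (negative ? "-" : "") + digits;        // consumers parse this with a decimal library
}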

The table is small, the consequences are large. Every service in every carrier gets the same treatment.

The protocol shim & sequence flow

The transpiler emits service-side scaffolding. The shim is the client-side piece — it sits between an unchanged legacy caller and the new gRPC service so the caller doesn’t know the protocol changed underneath it.

Translation flow for a single tpcall(). The shim runs a Lex/Yacc-generated codec to encode the legacy flat buffer into Protobuf, calls the gRPC service through HAProxy + Consul, and decodes the response back to a flat buffer that satisfies the legacy caller's struct layout.

Concretely, the shim takes a small declarative description of a Tuxedo buffer’s field layout and generates the encode / decode plumbing in both directions. A legacy C client keeps sending flattened buffers; a new gRPC service receives strongly-typed messages. A new gRPC client emits Protobuf; a still-legacy Tuxedo service receives a buffer that matches the C struct it has always expected.

Services don’t change; the shim changes only the protocol surface. Business logic, validation rules, database access, caching, every line of the domain layer survives the migration untouched. That’s the leverage. Decades of hard-won correctness in billing and provisioning code does not get re-litigated.
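
A sketch of the client-side half, assuming a tpcall-shaped entry point per service and reusing the generated codec and stubs from the wrapper above; the function and buffer names, address, and credentials are illustrative, and the real shim reaches the service through the HAProxy front door.

#include <grpcpp/grpcpp.h>
#include "customer.grpc.pb.h"       // generated stubs for the emitted .proto

// Drop-in replacement for one legacy call path: the caller still hands over a
// flat request buffer and gets a flat reply buffer back; gRPC happens inside.
int shim_get_memo(const CustomerMemoBuffer& in_buf, CustomerMemoBuffer& out_buf) {
  // 1. flat buffer -> Protobuf request (generated codec, request-side overload)
  customer::GetMemoRequest req;
  buf_to_pb(in_buf, req);

  // 2. unary gRPC call through the front door (channel cached across calls)
  static auto channel = grpc::CreateChannel("haproxy.internal:8443",
                                            grpc::InsecureChannelCredentials());
  auto stub = customer::CustomerService::NewStub(channel);

  grpc::ClientContext ctx;
  customer::GetMemoResponse resp;
  grpc::Status status = stub->GetMemo(&ctx, req, &resp);
  if (!status.ok()) return -1;      // map onto the legacy error convention

  // 3. Protobuf reply -> flat buffer (response-side overload), so the
  //    unchanged caller sees the struct layout it has always expected
  pb_to_buf(resp, out_buf);
  return 0;
}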

Conversational services with session affinity

Most modern gRPC traffic is unary: one request, one response, stateless. Tuxedo applications don’t all fit that shape. A meaningful fraction of services are conversational: tpconnect opens a session, tpsend / tprecv exchange messages back and forth across that session, tpdiscon ends it. State lives on the server for the duration of the conversation.

This is the unique architectural contribution of the framework. The naive map (“just use server-streaming gRPC”) doesn’t work, because the load balancer will happily land subsequent calls on a different backend that doesn’t have the session state. The session has to be sticky.

The pattern we settled on:

  1. Conversational RPCs are bidirectional streaming gRPC. One stream per tpconnect. The shim opens it; closes it on tpdiscon.
  2. HAProxy routes by a session-affinity cookie, set on the first frame of the stream. Subsequent frames on the same connection follow the same route; subsequent connections that present the same session token also route to the same backend if it’s healthy.
  3. Service-side, sessions live in a small in-memory registry keyed by session token (sketched after this list). The registry has a short TTL so a crashed client doesn’t leak state forever; the shim sends keep-alives over the stream to keep the session alive while the user is still in the flow.
  4. Failover is explicit, not implicit. If a backend dies mid-conversation, the client sees an explicit STREAM_ABORTED and can retry from the start of the conversation rather than from the middle. Hidden retries on stateful sessions are how production data gets corrupted, so we don’t do them.
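
A minimal sketch of the registry from item 3; SessionState’s contents, the TTL, and lookup-time eviction (rather than a background sweeper) are all simplifications for this writeup.

#include <chrono>
#include <iterator>
#include <memory>
#include <mutex>
#include <string>
#include <unordered_map>

// Per-conversation state, keyed by the session token carried on the stream.
struct SessionState {
  std::chrono::steady_clock::time_point last_seen;
  // ... whatever the conversational service accumulates between tpsend/tprecv ...
};

class SessionRegistry {
 public:
  explicit SessionRegistry(std::chrono::seconds ttl) : ttl_(ttl) {}

  // Called on every frame (including keep-alives): refreshes the TTL.
  std::shared_ptr<SessionState> touch(const std::string& token) {
    std::lock_guard<std::mutex> lock(mu_);
    evict_expired();
    auto& slot = sessions_[token];
    if (!slot) slot = std::make_shared<SessionState>();
    slot->last_seen = std::chrono::steady_clock::now();
    return slot;
  }

  // Called on tpdiscon / stream close: state goes away immediately.
  void close(const std::string& token) {
    std::lock_guard<std::mutex> lock(mu_);
    sessions_.erase(token);
  }

 private:
  // Crashed clients stop sending keep-alives and simply age out.
  void evict_expired() {
    const auto now = std::chrono::steady_clock::now();
    for (auto it = sessions_.begin(); it != sessions_.end();)
      it = (now - it->second->last_seen > ttl_) ? sessions_.erase(it) : std::next(it);
  }

  std::chrono::seconds ttl_;
  std::mutex mu_;
  std::unordered_map<std::string, std::shared_ptr<SessionState>> sessions_;
};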

Same primitives as stateless gRPC; an extra agreement on top about how state flows. Documented once, used everywhere.

Multi-language client stub generation

Once a service has a .proto, every gRPC ecosystem can generate clients off it. The framework ships two, both auto-generated as part of the build:

  • Java clients via Maven protobuf-maven-plugin. A small POM template consumes the per-service .proto and emits a versioned JAR with strongly-typed stubs. JVM-side teams import the JAR like any other dependency.

    <build>
      <extensions>
        <!-- supplies ${os.detected.classifier} for the protoc binaries -->
        <extension>
          <groupId>kr.motd.maven</groupId>
          <artifactId>os-maven-plugin</artifactId>
        </extension>
      </extensions>
      <plugins>
        <plugin>
          <groupId>org.xolstice.maven.plugins</groupId>
          <artifactId>protobuf-maven-plugin</artifactId>
          <configuration>
            <protocArtifact>com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}</protocArtifact>
            <pluginId>grpc-java</pluginId>
            <pluginArtifact>io.grpc:protoc-gen-grpc-java:${grpc.version}:exe:${os.detected.classifier}</pluginArtifact>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>compile</goal>
                <goal>compile-custom</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  • C++ clients via CMake. A reusable CMake function (grpc_generate(...)) wraps protoc plus the gRPC plugin and generates stubs at build time, no manual codegen step.

    find_package(Protobuf CONFIG REQUIRED)
    find_package(gRPC CONFIG REQUIRED)

    # Generates <name>.pb.cc / <name>.grpc.pb.cc from the .proto and attaches them to the target.
    function(grpc_generate target proto_file)
      get_filename_component(proto_dir "${proto_file}" DIRECTORY)
      get_filename_component(proto_name "${proto_file}" NAME_WE)
      set(out_dir "${CMAKE_CURRENT_BINARY_DIR}/generated")
      file(MAKE_DIRECTORY "${out_dir}")

      add_custom_command(
        OUTPUT "${out_dir}/${proto_name}.pb.cc" "${out_dir}/${proto_name}.grpc.pb.cc"
        COMMAND protobuf::protoc
        ARGS --proto_path="${proto_dir}"
             --cpp_out="${out_dir}"
             --grpc_out="${out_dir}"
             --plugin=protoc-gen-grpc=$<TARGET_FILE:gRPC::grpc_cpp_plugin>
             "${proto_file}"
        DEPENDS "${proto_file}"
      )

      target_sources(${target} PRIVATE
        "${out_dir}/${proto_name}.pb.cc"
        "${out_dir}/${proto_name}.grpc.pb.cc")
      target_include_directories(${target} PRIVATE "${out_dir}")
      target_link_libraries(${target} PRIVATE protobuf::libprotobuf gRPC::grpc++)
    endfunction()

The same .proto drives both. Adding a third language (Go, Rust, Python) is a matter of plugging the appropriate protoc plugin into the same pipeline; no schema work changes.

In-house orchestrator

Tuxedo’s process management — start, stop, supervise, restart on failure, fan out across a cluster — is one of the legitimately good things it gives you for free. When the protocol changes underneath but the deployment model has to stay non-containerized for risk-control reasons, you have to replace process management with something equivalent.

The framework’s answer is an in-house orchestrator — three small pieces that together do what Tuxedo’s tlisten / tmboot / tmshutdown did, but generic across languages:

  • A Java Spring Boot brain. A control-plane service that knows the cluster topology, owns the desired-state model (“on this host, run these N replicas of these M services”), and reconciles toward it. Exposes a small REST API and a CLI surface.
  • Python host agents. A daemon on every host that takes commands from the brain, supervises child processes (gRPC services), reports health and resource usage upstream, and handles graceful drain on shutdown. Python was the right choice for the host agent specifically because it’s the one language already installed on every box in the fleet — no new dependency to ship.
  • A Python CLI. What an operator actually types. orctl deploy <service> --replicas 4 --host pool-A, orctl drain <host>, orctl status. The CLI hits the brain’s REST API; the brain decides the plan; the host agents execute it. Each command is idempotent and reports what it did.

The orchestrator slots in alongside Nomad rather than replacing it. Nomad handles the workloads that fit its model cleanly; the orchestrator handles the ones that have non-containerized deployment requirements (signed binaries on specific kernels, hardware-licensed workloads, regulatory constraints). Same external surface — register a service, scale a service, drain a host — different execution underneath.

The PowerBuilder client challenge

A footnote in the architecture, but a real one in the rollout: a non-trivial slice of legacy clients are PowerBuilder desktop apps that were written when “RPC” meant Tuxedo. PowerBuilder doesn’t speak HTTP/2, which gRPC needs. “Rewrite the desktop apps” was, predictably, not on the table.

The fix is a custom DLL — a thin gRPC-over-HTTP/2 transport written in C++, exposed as a PowerBuilder external function library. The DLL handles the HTTP/2 framing, TLS, and gRPC’s length-prefixed message format; the PowerBuilder side calls into it through a tiny synchronous API that looks almost exactly like the Tuxedo tpcall it replaces. Drop in the DLL; the rest of the desktop app keeps working.
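
A sketch of the shape of the DLL’s exported surface, which PowerBuilder binds through an external function declaration; the function name, signature, and error convention are illustrative rather than the shipped API, and the body is elided.

#include <cstdint>

// Exported with C linkage so PowerBuilder can bind it as an external function.
// Inside, the DLL owns the HTTP/2 connection, TLS, and gRPC message framing.
extern "C" __declspec(dllexport)
int32_t grpc_tpcall(const char* method,        // e.g. "customer.CustomerService/GetMemo"
                    const uint8_t* request,    // flat buffer from the PowerBuilder side
                    int32_t request_len,
                    uint8_t* reply,            // caller-allocated reply buffer
                    int32_t* reply_len) {      // in: capacity, out: bytes written
  // 1. encode the flat buffer into the Protobuf request (generated codec)
  // 2. issue the unary gRPC call over the DLL's pooled HTTP/2 channel
  // 3. decode the Protobuf reply back into the caller's flat buffer
  // (implementation elided; the synchronous shape mirrors the Tuxedo tpcall)
  return 0;   // 0 = success, negative values map to legacy tpcall error codes
}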

Unglamorous, decisive. The rollout would not have completed without it.

The runtime

The transpiler is the small clever piece. The runtime is the unglamorous piece that makes it work in production.

Runtime: legacy Tuxedo clients hit the protocol shim, which speaks gRPC to the polyglot service mesh. HAProxy fronts the mesh; Consul publishes service identity; Consul Template syncs HAProxy config; Nomad orchestrates the containerized half; the in-house orchestrator handles the non-containerized half. Kafka unlocks event-driven flows once the migration completes.

Six components, each doing exactly one job:

  • HAProxy is the front door. Every incoming gRPC call lands here and gets load-balanced across whatever backends Consul currently knows about. HAProxy is rock-solid, well-understood, and replaceable — Envoy or NGINX could slot into the same role with no other changes.
  • Consul is the service registry. Every gRPC service registers itself on startup with health checks; Consul keeps the live map of “which services are healthy at which addresses.” Service discovery without a special client library: services look each other up by name, not by address (a registration sketch follows this list).
  • Consul Template is the bridge between Consul and HAProxy. It watches the Consul catalog, regenerates HAProxy’s config file when the topology changes, and triggers a reload. New service instances join the mesh in seconds without anyone editing a config file.
  • Nomad is the orchestrator for containerized workloads. Same shape as Kubernetes for our purposes; significantly simpler operational footprint.
  • The in-house orchestrator (above) handles the workloads that aren’t containerized.
  • Kafka is the open door. Once services speak gRPC, wiring them into event streams is no longer a special project — it’s just another service. Real-time billing event flows, audit pipelines, downstream analytics all become trivially available.
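
To make the registration step concrete, a sketch of what a service could send to its local Consul agent on startup, using Consul’s HTTP registration endpoint and its gRPC health check; the payload values and the use of libcurl are assumptions for illustration, not the framework’s actual client code.

#include <string>
#include <curl/curl.h>

// Register this instance with the local Consul agent so HAProxy (via
// Consul Template) starts routing to it. Assumes the service also exposes
// the standard grpc.health.v1 health-check service for the GRPC check.
bool register_with_consul(const std::string& service, int port) {
  const std::string body =
      std::string("{\"Name\":\"") + service + "\","
      "\"Port\":" + std::to_string(port) + ","
      "\"Check\":{\"GRPC\":\"127.0.0.1:" + std::to_string(port) + "\","
      "\"Interval\":\"10s\"}}";

  CURL* curl = curl_easy_init();
  if (!curl) return false;

  struct curl_slist* headers =
      curl_slist_append(nullptr, "Content-Type: application/json");
  curl_easy_setopt(curl, CURLOPT_URL,
                   "http://127.0.0.1:8500/v1/agent/service/register");
  curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

  const CURLcode rc = curl_easy_perform(curl);
  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  return rc == CURLE_OK;
}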

Picking HashiCorp tools (Consul + Nomad + Consul Template) was deliberate: each piece is replaceable, well-documented, OSS, and operationally calm. If a future adopter wants HAProxy → Envoy, Nomad → Kubernetes, or Consul → etcd, the swap is straightforward because nothing in the framework depends on the specific tool. It depends on the role each tool plays.

What shipped

The POC was a single production service migrated end-to-end. Same business logic, same data flows, same database — but now reachable over gRPC, observable through the new mesh, deployable through Nomad and the orchestrator, and (newly) able to publish into Kafka.

That POC is the proof. The rollout argument was: if this works for one service without changing business code, it can become the path for the rest. Different carrier deployments have different operational constraints, so the framework had to adapt around the service shape instead of forcing every system into one global migration plan.

If this works for one service without changing business code, it works for the rest.

— The framing that survived every review.

Lessons

A few things that turned out to matter more than I expected.

Honest framing scales better than aspirational framing. A reusable framework validated on a production service is a stronger signal than “migrated a backend” if the full fleet migration is still underway. It also survives a hostile interview question: “Did you really migrate the whole thing?” No — I built the framework and proved the path.

Migrate the protocol surface, not the application layer. The reason this finished in months instead of years is that no one had to re-litigate billing logic or provisioning rules. The schema for those domains stayed in the database and in the service code. What changed was the wire format. That’s a much smaller change than it looks like.

The transpiler is the leverage. Hand-writing one .proto per service and one wrapper per RPC is a six-month project times every service. Generating them from the existing contract files turns the same work into a few weeks of grammar work plus a forever-running build step. The expensive part of any migration is the cases where the team has to make a decision about each service; the cheap part is the cases where a deterministic tool emits the right answer.

Pick infrastructure pieces that each do one job. HAProxy doesn’t know about Consul. Consul doesn’t know about Nomad. Consul Template is the only piece that spans two — and it’s a thin daemon. Each component is individually swappable and operationally calm. The framework outlives any single tool choice.

The migration value is bigger than the code diff. The technical payoff is a typed service boundary. The organizational payoff is giving the program a repeatable way to modernize without reopening every piece of business logic.