Backend Code Methodology
Overview
- cdk
- core
- db
- context
- functions
- handlers
- workers
- utils-globals
- utils
CDK
This directory contains all AWS infrastructure code.
Core
The core directory houses the primary backend functionality. Here you will find all application-specific code, organized into the following sections:
- Functions: Endpoint functions are defined here, grouped by their corresponding endpoints.
- Handlers:
- Handler cron: Entry point for the cron handler
- Handler SQS: Lambda entry point for SQS handlers. This is where SQS messages are received and routed to the appropriate SQS function
- Handler API: Entry point for the API. This handler receives API requests and routes them to the specified function to handle the call
- Handler ad hoc: Contains any ad-hoc Lambda handlers that are needed. For example, in Bob Pay, this Lambda is used for FTP authentication
- Workers: Contains the batch worker implementation
- Utils: General utility functions specifically related to this project. These utility functions are typically project-specific and cannot be easily shared between projects.
Handler Functions
Handler functions are called from API endpoints. Each endpoint path is grouped in a single folder. For example, if the endpoint is /users, all API function calls (GET, POST, PATCH, DELETE) are grouped in the users folder.
Here is an example for the users endpoint:
"/users": {
"GET": users.GETUsers,
"POST": users.POSTUser,
"PATCH": users.PATCHUser,
"DELETE": users.DELETEUser,
},
Each of these functions should be defined in a separate file.
For example, in the users_get.go file we have:
func GETUsers(params GETUsersParams) (res []types.Users, err error) {
}
In the users_post.go file we have the POST function:
func POSTUser(_ struct{}, body POSTUserBody) (res types.User, err error) {
}
Handler Function Structure
A handler function consists of the following components:
- Body (for POST, PATCH, or DELETE requests) or parameters (for GET requests)
- Validation
- Handler function
Example
- In the users_post.go example, the body struct (or parameters struct for GET requests) is defined in the same file as the handler function.
- A validation function (Validate()) should be added to validate any fields. This validation function is called before the handler function executes. If the validation function returns an error, the handler function will not be called.
- Handler function names always start with the HTTP method (POST, GET, PATCH, DELETE) followed by the resource name. The naming convention is typically HTTP Method + Type. In this example, it's POST + User.
- The first parameter is the params struct: it is used in GET functions but left unused (_ struct{}) in POST, PATCH, and DELETE functions. The second parameter is the body, which is automatically populated by the framework.
- Where possible, return a defined struct and an error. If you need to return an any type, consider creating two separate endpoints instead.
Here is a full example of what a POST handler function might look like:
type POSTUserBody struct {
ID int64 `json:"id"`
Name string `json:"name"`
}
func (b *POSTUserBody) Validate() error {
if b.Name == "" {
return errors.Error("the user's 'name' is required")
}
return nil
}
func POSTUser(_ struct{}, body POSTUserBody) (res types.User, err error) {
}
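The "validate before handle" contract above can be sketched in plain Go. The POSTUserBody struct and Validate() method come from the example; the dispatch function is hypothetical and only illustrates what the real framework does for you:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

type POSTUserBody struct {
	ID   int64  `json:"id"`
	Name string `json:"name"`
}

func (b *POSTUserBody) Validate() error {
	if b.Name == "" {
		return errors.New("the user's 'name' is required")
	}
	return nil
}

// dispatch mimics the framework: deserialize the body, run
// Validate(), and only call the handler if validation passes.
func dispatch(raw []byte, handler func(POSTUserBody) error) error {
	var body POSTUserBody
	if err := json.Unmarshal(raw, &body); err != nil {
		return err
	}
	if err := body.Validate(); err != nil {
		return err // handler is never called
	}
	return handler(body)
}

func main() {
	err := dispatch([]byte(`{"id":1,"name":""}`), func(b POSTUserBody) error {
		fmt.Println("handler called for", b.Name)
		return nil
	})
	fmt.Println(err) // validation error; the handler did not run
}
```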
The Handler Function
The handler function serves as the entry point for an API call. Without careful structuring, these functions can quickly grow in size and complexity. To maintain clean code, keep these functions small and separate the business logic into the utils package (explained below).
All HTTP methods for an endpoint must live in the same folder, including the SQS functions related to that endpoint.
Database functions
Database functions reside in the core/db/ directory and follow these principles:
- One file per table: Each database table has its own file (e.g., db_users.go, db_accounts.go, db_payments.go)
- Query functions: Functions that query the database should be prefixed with Query or Get
  - QueryUsers() - Returns a list with filtering and pagination. This is normally what is used from the API
  - GetUserByID() - Returns a single record by ID
- Mutation functions: Functions that modify data use Insert, Update, Upsert, or Delete prefixes (e.g., UpsertNote(), InsertTag(), UpdateConfig())
- Helper functions: Database helper functions are in helpers.go for common query operations like QueryWhereEqual(), QueryAddPaging(), DefaultFilter()
- Separation of concerns: Handler functions should NEVER write raw SQL queries. All database operations must go through the db package functions.
- Reader/Writer separation: The system supports read replicas. Use context.CurrentDB(), which automatically routes GET requests to reader databases and mutations to writer databases.
- Upserts: Prefer not to use upserts. With an upsert, any column can be updated, so you run the risk of updating unintended columns. Generally only a few columns are updated at a time, so rather use the Set() function to update the specific columns.
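The reason Set() is safer than an upsert is that the generated UPDATE can only ever touch the columns you explicitly list. The project uses the query builder for this; the stdlib sketch below (buildUpdate is a hypothetical helper, not project code) just makes the idea concrete:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildUpdate sketches the idea behind Set(): the UPDATE statement
// only touches the columns passed in, so other columns cannot be
// clobbered the way a blanket upsert can. Illustrative only.
func buildUpdate(table string, id int64, cols map[string]any) (string, []any) {
	names := make([]string, 0, len(cols))
	for name := range cols {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic column order

	sets := make([]string, 0, len(names))
	args := make([]any, 0, len(names)+1)
	for _, name := range names {
		sets = append(sets, name+" = ?")
		args = append(args, cols[name])
	}
	args = append(args, id)
	query := fmt.Sprintf("UPDATE %s SET %s WHERE id = ?", table, strings.Join(sets, ", "))
	return query, args
}

func main() {
	q, args := buildUpdate("users", 7, map[string]any{"name": "Ann"})
	fmt.Println(q, args)
}
```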
Transactions
For operations requiring multiple database operations to succeed or fail together, use transactions:
err := context.CurrentDB().RunInTx(context.Current, nil, func(ctx cont.Context, tx bun.Tx) error {
// All database operations here use the transaction
err := InsertInvoice(invoice, tx)
if err != nil {
return err // This will rollback the transaction
}
err = AddInvoiceItems(tx, *invoice)
if err != nil {
return err // This will rollback the transaction
}
return nil // This will commit the transaction
})
Raw SQL (When Necessary)
Only use raw SQL for complex reports that can't be expressed with the query builder:
func ReportBillingTransactions(fromDate time.Time, toDate time.Time) ([]types.BillingTransactionsReport, error) {
queryString := fmt.Sprintf(`
WITH transactions AS (
SELECT ...
)
SELECT * FROM transactions
WHERE transaction_date >= '%s' AND transaction_date <= '%s'`,
date_utils.DateDBFormattedString(fromDate),
date_utils.DateDBFormattedString(toDate))
var results []types.BillingTransactionsReport
err := context.CurrentDB().NewRaw(queryString).Scan(context.Current, &results)
return results, err
}
SQS
Use SQS for:
- External API calls: Never make an external API call "inline" from a handler function. API Gateway currently has a 30-second timeout, so any request that takes longer will time out. An external API call can easily exceed that limit and take the whole request down with it, so external API calls always run in SQS.
- Long-running operations: Any operation that takes more than a few seconds and might hit the 30-second timeout (PDF generation, complex calculations, external API calls)
- Operations requiring retries: When an external API call needs to be retried if it fails, SQS is a perfect fit. We re-add the message to the queue so that it can be processed again. There are built-in retry mechanisms (check the code for details).
SQS Handler Structure
SQS handlers follow these conventions:
File Naming:
- SQS handlers are defined in the same folder as their related endpoint
- File name: {resource}_sqs.go (e.g., billing_sqs.go, credit_card_payment_sqs.go)
Function Naming:
- SQS{Description} (e.g., SQSPayInvoices, SQSSendWebhook)
Function Signature:
func SQSSendWebhook(sqsMessage types.WebhookSQSMessage) error {
// Your processing logic here
return nil
}
- Takes a single parameter: the message struct
- Returns only an error (or nil on success)
- The framework automatically deserializes the message into the struct
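The automatic deserialization step can be sketched as follows. WebhookSQSMessage stands in for types.WebhookSQSMessage, and route is a hypothetical stand-in for the framework's routing, shown only to make the contract concrete:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WebhookSQSMessage stands in for types.WebhookSQSMessage.
type WebhookSQSMessage struct {
	URL     string `json:"url"`
	Payload string `json:"payload"`
}

// SQSSendWebhook follows the convention: one message struct in,
// only an error out.
func SQSSendWebhook(msg WebhookSQSMessage) error {
	fmt.Println("would deliver", msg.Payload, "to", msg.URL)
	return nil
}

// route sketches what the framework does with a raw SQS body:
// unmarshal it into the handler's struct, then invoke the handler.
func route(body []byte) error {
	var msg WebhookSQSMessage
	if err := json.Unmarshal(body, &msg); err != nil {
		return err
	}
	return SQSSendWebhook(msg)
}

func main() {
	_ = route([]byte(`{"url":"https://example.com/hook","payload":"hello"}`))
}
```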
Utils Packages
The utils package is where the core application logic resides. This is where features are implemented. For example, the utils package contains a users package with a CreateUser() function. While a user might only be created through the POST /users endpoint, we need to separate the business logic from the endpoint handlers and place it in a shared location—the utils package serves this purpose.
Key Principle: Features should be loosely coupled and self-contained. All business logic for a feature should be encapsulated within its package and live in isolation from the rest of the project. This means the package should not be tightly coupled to project-specific implementations.
As an example, the otp package generates and validates a "one-time pin" used to validate a user's phone number. All the business logic is contained within this package and lives in isolation. If other projects need to implement the otp feature, you should generally be able to copy it to them and reuse approximately 80% of the code. Think of the API endpoints as the user interface (the entry point) that users interact with, and the utils package as the backend where the business logic lives.
HTTP Status Codes
Error Wrapping
Always wrap errors in API functions with errors.HTTP() to set the status code:
if err != nil {
return res, errors.HTTP(http.StatusBadRequest, err, "could not create user")
}
Never return raw errors - always add context about what operation failed.
Caching with Redis
Redis is used for caching frequently accessed data to improve performance:
- User data: Cached after retrieval to reduce database lookups
- Cache invalidation: Always delete from the cache when updating records (see redis.DeleteUser() after updates)
- Cache-aside pattern: Check Redis first, fall back to the database, then populate the cache
- Helper functions: Use utilities in utils/redis/ for consistent caching patterns
Example pattern:
// Check cache first
user := redis.GetUserByUserID(userID)
if user != nil {
return user, nil
}
// Fallback to database
user, err := db.GetUserByID(userID)
if err != nil {
return nil, err
}
// Populate cache
redis.SetUser(*user, nil)
return user, nil
Workers
Workers run as AWS Batch jobs for long-running background tasks:
When to use workers vs SQS:
- Workers: For scheduled batch operations (cron jobs), large data exports (CSV/PDF generation), reports
- SQS: For event-driven tasks, external API calls, operations triggered by user actions
Structure:
- Entry point: core/workers/workers.go
- Cron jobs: core/workers/core/cron/ - Scheduled tasks like billing runs
- Jobs: core/workers/core/jobs/ - On-demand jobs triggered programmatically
Examples:
- Billing invoice generation (cron)
- CSV export generation (job)
- Account reconciliation (cron)
Context Package
The context package provides access to request-scoped information:
Key Functions:
- context.Current - Access to the current request context, claim, and request ID
- context.CurrentDB() - Get a database connection (automatically routes to reader/writer)
- context.IsStaffUser() - Check if the current user is staff
- context.IsAccountUser() - Check if the current user is an account user
Current Claim:
- context.Current.Claim.UserID - ID of the authenticated user
- context.Current.Claim.AccountID - Account ID of the authenticated user
- context.Current.RequestID - Unique ID for this request (useful for tracing)
Why use context:
- Avoids passing database connections through every function
- Automatically handles read replica routing
- Provides consistent access to authentication information
Mental Model: How to Think About This Codebase
The Request Flow
API Request → Handler (validates & orchestrates) → Utils (business logic) → DB (persistence)
↓
SQS (async tasks, external APIs)
- API Endpoint - Receives HTTP request
- Handler Function - Validates input, orchestrates the flow
- Utils Package - Implements the core business logic
- DB Package - Talks to the database
- SQS (when needed) - Handles async operations
Key Principles
Separation of Concerns:
- Handlers = "What endpoint was called?"
- Utils = "What feature is being implemented?"
- DB = "How do we store/retrieve data?"
Think in Layers:
- Don't skip layers - handlers shouldn't write SQL
- Each layer has a purpose
- Lower layers (DB) don't know about higher layers (Handlers)
When in doubt:
- Look for similar existing code
- Follow the established patterns
- Ask: "Is this reusable?" → If yes, put it in utils
- Ask: "Does this talk to external services?" → If yes, use SQS
- Ask: "Could this take > 10 seconds?" → If yes, use SQS or workers