Microservices Architecture in Practice, 2025 Edition - From Design to Operations
A comprehensive guide to microservices design principles, implementation patterns, and operational best practices, covering Docker, Kubernetes, gRPC, and service meshes with hands-on examples.
About a 5-minute read
Technical article
Hands-on
Key takeaways
This article covers microservices design principles, implementation patterns, and operational best practices, with concrete code examples for Docker, Kubernetes, gRPC, and service meshes.
Introduction
Microservices architecture is a design approach that decomposes a large application into small, independently deployable services. As of 2025 it is the standard approach for cloud-native development. This article walks through microservice design, implementation, and operations from a practical standpoint.
Basic Principles of Microservices Architecture
Architecture Overview
graph TB
  subgraph "API Gateway"
    GW[API Gateway]
  end
  subgraph "Microservices"
    US[User Service]
    OS[Order Service]
    PS[Product Service]
    NS[Notification Service]
    IS[Inventory Service]
  end
  subgraph "Message Broker"
    MB[RabbitMQ/Kafka]
  end
  subgraph "Data Layer"
    UD[(User DB)]
    OD[(Order DB)]
    PD[(Product DB)]
    ID[(Inventory DB)]
  end
  subgraph "Infrastructure"
    SD[Service Discovery]
    CM[Config Management]
    LG[Centralized Logging]
    MT[Distributed Tracing]
  end
  GW --> US
  GW --> OS
  GW --> PS
  US --> UD
  OS --> OD
  PS --> PD
  IS --> ID
  OS --> MB
  MB --> NS
  MB --> IS
  US --> SD
  OS --> SD
  PS --> SD
  NS --> SD
  IS --> SD
Design Principles
- Single Responsibility Principle (SRP)
  - Each service focuses on a single business capability
  - There is only one reason for a service to change
- Loose coupling
  - Minimize dependencies between services
  - Communicate through well-defined interfaces (see the sketch below)
- High cohesion
  - Keep related functionality inside the same service
  - Keep data and the business logic that owns it together
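To make loose coupling concrete, here is a minimal Go sketch. All names (UserClient, OrderService, UserExists) are illustrative, not taken from a real library: the order side depends only on a small interface, so the user service can be reached over gRPC or HTTP, or replaced by a test fake, without touching the caller.
// example/loosecoupling.go (illustrative)
package loosecoupling

import (
	"context"
	"errors"
)

var ErrUnknownUser = errors.New("unknown user")

// UserClient is the only contract the order side knows about.
type UserClient interface {
	UserExists(ctx context.Context, userID string) (bool, error)
}

// OrderService depends on the interface, not on a concrete transport.
type OrderService struct {
	users UserClient // a gRPC client in production, a fake in tests
}

func NewOrderService(users UserClient) *OrderService {
	return &OrderService{users: users}
}

func (s *OrderService) PlaceOrder(ctx context.Context, userID string) error {
	ok, err := s.users.UserExists(ctx, userID)
	if err != nil {
		return err
	}
	if !ok {
		return ErrUnknownUser
	}
	// ...order creation logic...
	return nil
}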
Service Design and Implementation
Bounded Contexts with Domain-Driven Design (DDD)
// user-service/domain/user.go
package domain

import (
	"errors"
	"regexp"
	"time"

	"github.com/google/uuid"
)

// ErrUserNotFound is returned when a user cannot be found in the repository.
var ErrUserNotFound = errors.New("user not found")

type UserID string

type User struct {
	ID        UserID
	Email     Email
	Name      string
	CreatedAt time.Time
	UpdatedAt time.Time
}

// Email is a value object that guarantees a syntactically valid address.
type Email struct {
	value string
}

var emailRe = regexp.MustCompile(`^[^@\s]+@[^@\s]+\.[^@\s]+$`)

func NewEmail(value string) (Email, error) {
	if !isValidEmail(value) {
		return Email{}, errors.New("invalid email format")
	}
	return Email{value: value}, nil
}

func (e Email) String() string {
	return e.value
}

// isValidEmail is a simple format check; production code may want a stricter validator.
func isValidEmail(value string) bool {
	return emailRe.MatchString(value)
}

func generateUUID() string {
	return uuid.NewString()
}

// UserRepository interface
type UserRepository interface {
	Save(user *User) error
	FindByID(id UserID) (*User, error)
	FindByEmail(email Email) (*User, error)
	Delete(id UserID) error
}

// UserService is the domain service for user registration and lookup.
type UserService struct {
	repo UserRepository
}

func NewUserService(repo UserRepository) *UserService {
	return &UserService{repo: repo}
}

func (s *UserService) RegisterUser(email string, name string) (*User, error) {
	emailVO, err := NewEmail(email)
	if err != nil {
		return nil, err
	}
	// Duplicate check
	existing, _ := s.repo.FindByEmail(emailVO)
	if existing != nil {
		return nil, errors.New("email already exists")
	}
	user := &User{
		ID:        UserID(generateUUID()),
		Email:     emailVO,
		Name:      name,
		CreatedAt: time.Now(),
		UpdatedAt: time.Now(),
	}
	if err := s.repo.Save(user); err != nil {
		return nil, err
	}
	return user, nil
}

// GetUser fetches a user by ID, returning ErrUserNotFound if absent.
func (s *UserService) GetUser(id UserID) (*User, error) {
	return s.repo.FindByID(id)
}
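As a hedged usage sketch, the in-memory repository below satisfies UserRepository for local runs and unit tests. It is illustrative and not part of the service's production code; any persistent implementation would live behind the same interface.
// user-service/domain/inmemory_repo.go (illustrative)
package domain

import "sync"

// InMemoryUserRepository is a map-backed UserRepository for tests and demos.
type InMemoryUserRepository struct {
	mu    sync.RWMutex
	users map[UserID]*User
}

func NewInMemoryUserRepository() *InMemoryUserRepository {
	return &InMemoryUserRepository{users: make(map[UserID]*User)}
}

func (r *InMemoryUserRepository) Save(u *User) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.users[u.ID] = u
	return nil
}

func (r *InMemoryUserRepository) FindByID(id UserID) (*User, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	if u, ok := r.users[id]; ok {
		return u, nil
	}
	return nil, ErrUserNotFound
}

func (r *InMemoryUserRepository) FindByEmail(email Email) (*User, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	for _, u := range r.users {
		if u.Email == email {
			return u, nil
		}
	}
	return nil, ErrUserNotFound
}

func (r *InMemoryUserRepository) Delete(id UserID) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.users, id)
	return nil
}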
Inter-Service Communication with gRPC
// proto/user.proto
syntax = "proto3";

package user.v1;

option go_package = "github.com/example/user-service/api/v1;userv1";

service UserService {
  rpc CreateUser(CreateUserRequest) returns (CreateUserResponse);
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
  rpc UpdateUser(UpdateUserRequest) returns (UpdateUserResponse);
  rpc DeleteUser(DeleteUserRequest) returns (DeleteUserResponse);
}

message CreateUserRequest {
  string email = 1;
  string name = 2;
}

message CreateUserResponse {
  User user = 1;
}

message User {
  string id = 1;
  string email = 2;
  string name = 3;
  int64 created_at = 4;
  int64 updated_at = 5;
}

message GetUserRequest {
  string id = 1;
}

message GetUserResponse {
  User user = 1;
}

// Minimal shapes for the remaining RPCs so the file compiles.
message UpdateUserRequest {
  string id = 1;
  string name = 2;
}

message UpdateUserResponse {
  User user = 1;
}

message DeleteUserRequest {
  string id = 1;
}

message DeleteUserResponse {}
gRPC Server Implementation
// user-service/api/grpc/server.go
package grpc

import (
	"context"
	"errors"

	userv1 "github.com/example/user-service/api/v1"
	"github.com/example/user-service/domain"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type UserServiceServer struct {
	userv1.UnimplementedUserServiceServer
	userService *domain.UserService
}

func NewUserServiceServer(userService *domain.UserService) *UserServiceServer {
	return &UserServiceServer{
		userService: userService,
	}
}

func (s *UserServiceServer) CreateUser(
	ctx context.Context,
	req *userv1.CreateUserRequest,
) (*userv1.CreateUserResponse, error) {
	user, err := s.userService.RegisterUser(req.Email, req.Name)
	if err != nil {
		return nil, status.Errorf(codes.InvalidArgument, "failed to create user: %v", err)
	}
	return &userv1.CreateUserResponse{
		User: domainUserToProto(user),
	}, nil
}

func (s *UserServiceServer) GetUser(
	ctx context.Context,
	req *userv1.GetUserRequest,
) (*userv1.GetUserResponse, error) {
	user, err := s.userService.GetUser(domain.UserID(req.Id))
	if err != nil {
		if errors.Is(err, domain.ErrUserNotFound) {
			return nil, status.Errorf(codes.NotFound, "user not found")
		}
		return nil, status.Errorf(codes.Internal, "failed to get user: %v", err)
	}
	return &userv1.GetUserResponse{
		User: domainUserToProto(user),
	}, nil
}

// domainUserToProto maps the domain model to the generated protobuf type.
func domainUserToProto(u *domain.User) *userv1.User {
	return &userv1.User{
		Id:        string(u.ID),
		Email:     u.Email.String(),
		Name:      u.Name,
		CreatedAt: u.CreatedAt.Unix(),
		UpdatedAt: u.UpdatedAt.Unix(),
	}
}
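For completeness, here is a minimal client sketch. The address and the plaintext credentials are illustrative (a production deployment would use the mTLS setup from the security section); userv1 is the package generated from the proto above.
// cmd/client/main.go (sketch)
package main

import (
	"context"
	"log"
	"time"

	userv1 "github.com/example/user-service/api/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	conn, err := grpc.Dial("user-service:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := userv1.NewUserServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	resp, err := client.CreateUser(ctx, &userv1.CreateUserRequest{
		Email: "alice@example.com",
		Name:  "Alice",
	})
	if err != nil {
		log.Fatalf("CreateUser: %v", err)
	}
	log.Printf("created user %s", resp.User.Id)
}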
Event-Driven Architecture
Event Streaming with Apache Kafka
// event/producer.go
package event

import (
	"encoding/json"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

type EventProducer struct {
	producer *kafka.Producer
}

func NewEventProducer(brokers string) (*EventProducer, error) {
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers": brokers,
		"acks":              "all",
		"retries":           5,
	})
	if err != nil {
		return nil, err
	}
	return &EventProducer{producer: p}, nil
}

type UserCreatedEvent struct {
	EventID   string `json:"event_id"`
	UserID    string `json:"user_id"`
	Email     string `json:"email"`
	Name      string `json:"name"`
	Timestamp int64  `json:"timestamp"`
}

func (p *EventProducer) PublishUserCreated(event UserCreatedEvent) error {
	value, err := json.Marshal(event)
	if err != nil {
		return err
	}
	topic := "user.created"
	message := &kafka.Message{
		TopicPartition: kafka.TopicPartition{
			Topic:     &topic,
			Partition: kafka.PartitionAny,
		},
		Key:   []byte(event.UserID),
		Value: value,
	}
	return p.producer.Produce(message, nil)
}
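Produce is asynchronous: delivery results arrive on the producer's Events() channel, and unflushed messages can be lost on exit. A hedged usage sketch (same package, so it can reach the unexported producer field; the broker address and event values are illustrative):
// event/producer_usage.go (sketch)
package event

import (
	"log"
	"time"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// PublishExample drains delivery reports in the background and flushes
// in-flight messages before shutdown so no events are silently dropped.
func PublishExample() {
	p, err := NewEventProducer("kafka:9092")
	if err != nil {
		log.Fatalf("producer: %v", err)
	}
	go func() {
		for e := range p.producer.Events() {
			if m, ok := e.(*kafka.Message); ok && m.TopicPartition.Error != nil {
				log.Printf("delivery failed: %v", m.TopicPartition.Error)
			}
		}
	}()
	_ = p.PublishUserCreated(UserCreatedEvent{
		EventID:   "evt-1",
		UserID:    "user-1",
		Email:     "alice@example.com",
		Name:      "Alice",
		Timestamp: time.Now().Unix(),
	})
	p.producer.Flush(5000) // wait up to 5s for in-flight messages
}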
Event Consumers
// notification-service/consumer/consumer.go
package consumer

import (
	"encoding/json"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// UserCreatedEvent mirrors the producer-side payload; each consumer keeps
// its own copy of the schema it depends on. NotificationService and
// handleOrderPlaced are assumed to be defined elsewhere in this package.
type UserCreatedEvent struct {
	EventID   string `json:"event_id"`
	UserID    string `json:"user_id"`
	Email     string `json:"email"`
	Name      string `json:"name"`
	Timestamp int64  `json:"timestamp"`
}

type NotificationConsumer struct {
	consumer *kafka.Consumer
	notifier *NotificationService
}

func NewNotificationConsumer(brokers string, notifier *NotificationService) (*NotificationConsumer, error) {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": brokers,
		"group.id":          "notification-service",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		return nil, err
	}
	return &NotificationConsumer{
		consumer: c,
		notifier: notifier,
	}, nil
}

func (nc *NotificationConsumer) Start() error {
	topics := []string{"user.created", "order.placed"}
	err := nc.consumer.SubscribeTopics(topics, nil)
	if err != nil {
		return err
	}
	for {
		// A timeout of -1 blocks until a message arrives
		msg, err := nc.consumer.ReadMessage(-1)
		if err != nil {
			log.Printf("Consumer error: %v\n", err)
			continue
		}
		switch *msg.TopicPartition.Topic {
		case "user.created":
			nc.handleUserCreated(msg.Value)
		case "order.placed":
			nc.handleOrderPlaced(msg.Value)
		}
	}
}

func (nc *NotificationConsumer) handleUserCreated(data []byte) {
	var event UserCreatedEvent
	if err := json.Unmarshal(data, &event); err != nil {
		log.Printf("Failed to unmarshal event: %v\n", err)
		return
	}
	// Send the welcome email
	err := nc.notifier.SendWelcomeEmail(event.Email, event.Name)
	if err != nil {
		log.Printf("Failed to send welcome email: %v\n", err)
	}
}
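One hedged addition: the Start loop above never returns, so a small Close method lets the service shut down without waiting for the group's session timeout.
// Close tears down the underlying Kafka consumer so the consumer group
// can rebalance promptly on shutdown.
func (nc *NotificationConsumer) Close() error {
	return nc.consumer.Close()
}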
The API Gateway Pattern
Configuring Kong Gateway
# kong.yml
_format_version: "3.0"
services:
  - name: user-service
    url: grpc://user-service:50051
    routes:
      - name: user-routes
        protocols:
          - grpc
        paths:
          - /user.v1.UserService/
    plugins:
      - name: rate-limiting
        config:
          minute: 60
          policy: local
      - name: jwt
        config:
          secret_is_base64: false
          claims_to_verify:
            - exp
  - name: order-service
    url: grpc://order-service:50052
    routes:
      - name: order-routes
        protocols:
          - grpc
        paths:
          - /order.v1.OrderService/
    plugins:
      - name: rate-limiting
        config:
          minute: 30
          policy: local
      - name: request-transformer
        config:
          add:
            headers:
              - x-service-name:order-service
plugins:
  - name: prometheus
    config:
      status_code_metrics: true
      latency_metrics: true
      bandwidth_metrics: true
  - name: correlation-id
    config:
      header_name: X-Correlation-ID
      generator: uuid
      echo_downstream: true
Data Management Strategy
Database per Service
// order-service/repository/postgres.go
package repository

import (
	"database/sql"
	"time"

	_ "github.com/lib/pq"
)

type PostgresOrderRepository struct {
	db *sql.DB
}

func NewPostgresOrderRepository(connectionString string) (*PostgresOrderRepository, error) {
	db, err := sql.Open("postgres", connectionString)
	if err != nil {
		return nil, err
	}
	// Connection pool settings
	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(5)
	db.SetConnMaxLifetime(5 * time.Minute)
	return &PostgresOrderRepository{db: db}, nil
}

func (r *PostgresOrderRepository) CreateOrder(order *Order) error {
	tx, err := r.db.Begin()
	if err != nil {
		return err
	}
	// Rollback is a no-op once Commit has succeeded
	defer tx.Rollback()
	query := `
		INSERT INTO orders (id, user_id, total_amount, status, created_at)
		VALUES ($1, $2, $3, $4, $5)
	`
	_, err = tx.Exec(query, order.ID, order.UserID, order.TotalAmount, order.Status, order.CreatedAt)
	if err != nil {
		return err
	}
	// Insert the order items
	for _, item := range order.Items {
		itemQuery := `
			INSERT INTO order_items (id, order_id, product_id, quantity, price)
			VALUES ($1, $2, $3, $4, $5)
		`
		_, err = tx.Exec(itemQuery, item.ID, order.ID, item.ProductID, item.Quantity, item.Price)
		if err != nil {
			return err
		}
	}
	return tx.Commit()
}
Distributed Transactions - the Saga Pattern
// saga/order_saga.go
package saga

import (
	"context"
	"fmt"
	"log"
	"time"
)

// The collaborator interfaces and request/event types below are assumed
// shapes, sketched here so the example compiles; the real definitions
// would live in their own packages.
type OrderService interface {
	CreateOrder(req CreateOrderRequest) error
	CancelOrder(orderID string) error
}

type PaymentService interface {
	ProcessPayment(userID string, amount int64) error
	RefundPayment(paymentID string) error
}

type InventoryService interface {
	ReserveItems(items []OrderItem) error
	ReleaseItems(items []OrderItem) error
}

type EventProducer interface {
	PublishOrderCompleted(event OrderCompletedEvent) error
}

type OrderItem struct {
	ProductID string
	Quantity  int
}

type CreateOrderRequest struct {
	OrderID     string
	UserID      string
	PaymentID   string
	TotalAmount int64
	Items       []OrderItem
}

type OrderCompletedEvent struct {
	SagaID  string
	OrderID string
}

type OrderSaga struct {
	orderService     OrderService
	paymentService   PaymentService
	inventoryService InventoryService
	eventProducer    EventProducer
}

type SagaStep struct {
	name         string
	transaction  func() error
	compensation func() error
}

// generateSagaID is illustrative; a real implementation would use a UUID.
func generateSagaID() string {
	return fmt.Sprintf("saga-%d", time.Now().UnixNano())
}

func (s *OrderSaga) CreateOrder(ctx context.Context, req CreateOrderRequest) error {
	sagaID := generateSagaID()
	steps := []SagaStep{
		{
			name: "create_order",
			transaction: func() error {
				return s.orderService.CreateOrder(req)
			},
			compensation: func() error {
				return s.orderService.CancelOrder(req.OrderID)
			},
		},
		{
			name: "reserve_inventory",
			transaction: func() error {
				return s.inventoryService.ReserveItems(req.Items)
			},
			compensation: func() error {
				return s.inventoryService.ReleaseItems(req.Items)
			},
		},
		{
			name: "process_payment",
			transaction: func() error {
				return s.paymentService.ProcessPayment(req.UserID, req.TotalAmount)
			},
			compensation: func() error {
				return s.paymentService.RefundPayment(req.PaymentID)
			},
		},
	}
	completedSteps := []SagaStep{}
	for _, step := range steps {
		log.Printf("Executing saga step: %s", step.name)
		if err := step.transaction(); err != nil {
			log.Printf("Saga step %s failed: %v", step.name, err)
			// Run compensating transactions in reverse order
			for i := len(completedSteps) - 1; i >= 0; i-- {
				compensationStep := completedSteps[i]
				log.Printf("Compensating: %s", compensationStep.name)
				if err := compensationStep.compensation(); err != nil {
					log.Printf("Compensation failed for %s: %v", compensationStep.name, err)
				}
			}
			return fmt.Errorf("saga failed at step %s: %w", step.name, err)
		}
		completedSteps = append(completedSteps, step)
	}
	// Publish the saga-completed event
	s.eventProducer.PublishOrderCompleted(OrderCompletedEvent{
		SagaID:  sagaID,
		OrderID: req.OrderID,
	})
	return nil
}
Containerization and Orchestration
Optimizing the Dockerfile
# user-service/Dockerfile
# Build stage
FROM golang:1.21-alpine AS builder
RUN apk add --no-cache git ca-certificates
WORKDIR /app
# Cache dependencies
COPY go.mod go.sum ./
RUN go mod download
# Copy the source code
COPY . .
# Build a static binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o user-service ./cmd/server
# Runtime stage
FROM scratch
# Copy CA certificates
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy the binary
COPY --from=builder /app/user-service /user-service
# Health check (exec form, since scratch has no shell)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD ["/user-service", "health"]
EXPOSE 50051
ENTRYPOINT ["/user-service"]
Kubernetes Manifests
# k8s/user-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: microservices
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
    spec:
      containers:
        - name: user-service
          image: myregistry/user-service:v1.0.0
          ports:
            - containerPort: 50051
              name: grpc
            - containerPort: 9090
              name: metrics
          env:
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: user-db-secret
                  key: host
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-db-secret
                  key: password
            - name: JAEGER_AGENT_HOST
              value: jaeger-agent.observability
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          livenessProbe:
            grpc:
              port: 50051
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            grpc:
              port: 50051
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: microservices
spec:
  selector:
    app: user-service
  ports:
    - port: 50051
      targetPort: 50051
      name: grpc
    - port: 9090
      targetPort: 9090
      name: metrics
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service-hpa
  namespace: microservices
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
Service Mesh - Istio
Istio Configuration
# istio/virtual-service.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: user-service
  namespace: microservices
spec:
  hosts:
    - user-service
  http:
    - match:
        - headers:
            x-version:
              exact: v2
      route:
        - destination:
            host: user-service
            subset: v2
          weight: 100
    - route:
        - destination:
            host: user-service
            subset: v1
          weight: 90
        - destination:
            host: user-service
            subset: v2
          weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: user-service
  namespace: microservices
spec:
  host: user-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        http2MaxRequests: 100
    loadBalancer:
      simple: LEAST_CONN
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Monitoring and Logging
Distributed Tracing - OpenTelemetry
// tracing/tracer.go
package tracing

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
	"go.opentelemetry.io/otel/exporters/jaeger"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

func InitTracer(serviceName, jaegerEndpoint string) error {
	exporter, err := jaeger.New(
		jaeger.WithAgentEndpoint(jaeger.WithAgentHost(jaegerEndpoint)),
	)
	if err != nil {
		return err
	}
	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter),
		sdktrace.WithResource(
			resource.NewWithAttributes(
				semconv.SchemaURL,
				semconv.ServiceNameKey.String(serviceName),
			),
		),
	)
	otel.SetTracerProvider(tp)
	return nil
}

// gRPC interceptor
func UnaryServerInterceptor() grpc.UnaryServerInterceptor {
	return func(
		ctx context.Context,
		req interface{},
		info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler,
	) (interface{}, error) {
		tracer := otel.Tracer("grpc-server")
		ctx, span := tracer.Start(ctx, info.FullMethod)
		defer span.End()
		// Pull the correlation ID out of the incoming metadata
		md, ok := metadata.FromIncomingContext(ctx)
		if ok {
			if correlationIDs := md.Get("x-correlation-id"); len(correlationIDs) > 0 {
				span.SetAttributes(
					attribute.String("correlation.id", correlationIDs[0]),
				)
			}
		}
		resp, err := handler(ctx, req)
		if err != nil {
			span.RecordError(err)
			span.SetStatus(codes.Error, err.Error())
		}
		return resp, err
	}
}
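A hedged wiring sketch showing where the interceptor plugs in: the tracer is initialized before the server accepts traffic, and the interceptor is registered at construction time. The module path follows this article's example import; the address and service registration are placeholders.
// cmd/server/main.go (sketch)
package main

import (
	"log"
	"net"

	"github.com/example/user-service/tracing"
	"google.golang.org/grpc"
)

func main() {
	// Initialize the tracer before the server starts accepting traffic.
	if err := tracing.InitTracer("user-service", "jaeger-agent.observability"); err != nil {
		log.Fatalf("init tracer: %v", err)
	}
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer(
		grpc.UnaryInterceptor(tracing.UnaryServerInterceptor()),
	)
	// ...register service implementations here...
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}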
Metrics Collection - Prometheus
// metrics/metrics.go
package metrics

import (
	"context"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"google.golang.org/grpc"
)

var (
	RequestsTotal = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "grpc_requests_total",
			Help: "Total number of gRPC requests",
		},
		[]string{"service", "method", "status"},
	)
	RequestDuration = promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "grpc_request_duration_seconds",
			Help:    "Duration of gRPC requests",
			Buckets: prometheus.DefBuckets,
		},
		[]string{"service", "method"},
	)
	ActiveConnections = promauto.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "grpc_active_connections",
			Help: "Number of active gRPC connections",
		},
		[]string{"service"},
	)
)

// Prometheus metrics interceptor
func UnaryServerInterceptor(serviceName string) grpc.UnaryServerInterceptor {
	return func(
		ctx context.Context,
		req interface{},
		info *grpc.UnaryServerInfo,
		handler grpc.UnaryHandler,
	) (interface{}, error) {
		timer := prometheus.NewTimer(
			RequestDuration.WithLabelValues(serviceName, info.FullMethod),
		)
		defer timer.ObserveDuration()
		resp, err := handler(ctx, req)
		status := "success"
		if err != nil {
			status = "error"
		}
		RequestsTotal.WithLabelValues(serviceName, info.FullMethod, status).Inc()
		return resp, err
	}
}
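The metrics still need to be exposed for Prometheus to scrape. A minimal sketch using the standard promhttp handler, serving on the port 9090 that the Kubernetes manifest above annotates:
// metrics/server.go (sketch)
package metrics

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// ServeMetrics exposes /metrics for Prometheus scraping; typically run in
// its own goroutine alongside the gRPC server, e.g. go ServeMetrics(":9090").
func ServeMetrics(addr string) error {
	mux := http.NewServeMux()
	mux.Handle("/metrics", promhttp.Handler())
	return http.ListenAndServe(addr, mux)
}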
Security
mTLS Authentication
// security/tls.go
package security

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net"
	"os"

	grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
	grpc_auth "github.com/grpc-ecosystem/go-grpc-middleware/auth"
	grpc_recovery "github.com/grpc-ecosystem/go-grpc-middleware/recovery"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func LoadTLSCredentials(certFile, keyFile, caFile string) (credentials.TransportCredentials, error) {
	// Load the CA certificate
	caCert, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	certPool := x509.NewCertPool()
	if !certPool.AppendCertsFromPEM(caCert) {
		return nil, fmt.Errorf("failed to add CA certificate")
	}
	// Load the server certificate
	serverCert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	config := &tls.Config{
		Certificates: []tls.Certificate{serverCert},
		ClientAuth:   tls.RequireAndVerifyClientCert,
		ClientCAs:    certPool,
	}
	return credentials.NewTLS(config), nil
}

// StartSecureServer starts a gRPC server with mTLS; register is called so the
// caller can attach its service implementations before Serve. authFunc (a
// grpc_auth.AuthFunc) is assumed to be defined elsewhere in this package, and
// the tracing/metrics interceptors from earlier sections would be chained here too.
func StartSecureServer(address string, register func(*grpc.Server)) error {
	creds, err := LoadTLSCredentials(
		"/certs/server.crt",
		"/certs/server.key",
		"/certs/ca.crt",
	)
	if err != nil {
		return err
	}
	lis, err := net.Listen("tcp", address)
	if err != nil {
		return err
	}
	grpcServer := grpc.NewServer(
		grpc.Creds(creds),
		grpc.UnaryInterceptor(grpc_middleware.ChainUnaryServer(
			grpc_auth.UnaryServerInterceptor(authFunc),
			grpc_recovery.UnaryServerInterceptor(),
		)),
	)
	register(grpcServer)
	return grpcServer.Serve(lis)
}
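For mutual TLS the client side needs the mirror-image configuration; a hedged sketch assuming the same certificate file layout:
// security/client_tls.go (sketch)
package security

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"

	"google.golang.org/grpc/credentials"
)

// LoadClientTLSCredentials presents a client certificate and verifies the
// server against the shared CA, completing the mTLS handshake.
func LoadClientTLSCredentials(certFile, keyFile, caFile string) (credentials.TransportCredentials, error) {
	caCert, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	certPool := x509.NewCertPool()
	if !certPool.AppendCertsFromPEM(caCert) {
		return nil, fmt.Errorf("failed to add CA certificate")
	}
	clientCert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return credentials.NewTLS(&tls.Config{
		Certificates: []tls.Certificate{clientCert},
		RootCAs:      certPool,
	}), nil
}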
Testing and CI/CD
Implementing Integration Tests
// integration_test.go
package integration

import (
	"context"
	"fmt"
	"testing"
	"time"

	"github.com/docker/go-connections/nat"
	"github.com/stretchr/testify/suite"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

// UserService, CreateUserRequest, and NewTestConsumer are the service-level
// types under test, defined elsewhere in this package.
type IntegrationTestSuite struct {
	suite.Suite
	postgresContainer testcontainers.Container
	kafkaContainer    testcontainers.Container
	userService       *UserService
}

func (suite *IntegrationTestSuite) SetupSuite() {
	ctx := context.Background()
	// Start the PostgreSQL container
	postgresReq := testcontainers.ContainerRequest{
		Image:        "postgres:15",
		ExposedPorts: []string{"5432/tcp"},
		Env: map[string]string{
			"POSTGRES_PASSWORD": "test",
			"POSTGRES_DB":       "testdb",
		},
		WaitingFor: wait.ForSQL("5432/tcp", "postgres", func(port nat.Port) string {
			return fmt.Sprintf("postgres://postgres:test@localhost:%s/testdb?sslmode=disable", port.Port())
		}),
	}
	postgres, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: postgresReq,
		Started:          true,
	})
	suite.Require().NoError(err)
	suite.postgresContainer = postgres
	// Start the Kafka container
	kafkaReq := testcontainers.ContainerRequest{
		Image:        "confluentinc/cp-kafka:latest",
		ExposedPorts: []string{"9092/tcp"},
		Env: map[string]string{
			"KAFKA_ZOOKEEPER_CONNECT":                "zookeeper:2181",
			"KAFKA_ADVERTISED_LISTENERS":             "PLAINTEXT://localhost:9092",
			"KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR": "1",
		},
	}
	kafka, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: kafkaReq,
		Started:          true,
	})
	suite.Require().NoError(err)
	suite.kafkaContainer = kafka
	// Initialize the service under test
	dbHost, _ := postgres.Host(ctx)
	dbPort, _ := postgres.MappedPort(ctx, "5432")
	suite.userService = NewUserService(fmt.Sprintf(
		"postgres://postgres:test@%s:%s/testdb?sslmode=disable",
		dbHost, dbPort.Port(),
	))
}

func (suite *IntegrationTestSuite) TestCreateUserFlow() {
	ctx := context.Background()
	// Create a user
	user, err := suite.userService.CreateUser(ctx, &CreateUserRequest{
		Email: "test@example.com",
		Name:  "Test User",
	})
	suite.NoError(err)
	suite.NotNil(user)
	// Fetch the user back
	retrieved, err := suite.userService.GetUser(ctx, user.ID)
	suite.NoError(err)
	suite.Equal(user.Email, retrieved.Email)
	// Verify that the event was published by reading it back from Kafka
	consumer := NewTestConsumer(suite.kafkaContainer)
	message, err := consumer.ReadMessage(5 * time.Second)
	suite.NoError(err)
	suite.Equal("user.created", message.Topic)
}
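testify suites do not run on their own; the standard entry point below (added to the same file) hooks the suite into go test.
// TestIntegrationSuite is the go-test entry point that runs the suite.
func TestIntegrationSuite(t *testing.T) {
	suite.Run(t, new(IntegrationTestSuite))
}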
GitHub Actions CI/CD Pipeline
# .github/workflows/microservice-ci.yml
name: Microservice CI/CD
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
          POSTGRES_DB: testdb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - name: Cache Go modules
        uses: actions/cache@v3
        with:
          path: ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-
      - name: Run tests
        run: |
          go test -v -race -coverprofile=coverage.out ./...
          go tool cover -html=coverage.out -o coverage.html
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.out
      - name: Run linter
        uses: golangci/golangci-lint-action@v3
        with:
          version: latest
  build:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Docker images
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}/user-service:latest
            ghcr.io/${{ github.repository }}/user-service:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
  deploy:
    needs: build
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v3
      - name: Install kubectl
        uses: azure/setup-kubectl@v3
      - name: Configure kubectl
        run: |
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > kubeconfig
          echo "KUBECONFIG=$PWD/kubeconfig" >> "$GITHUB_ENV"
      - name: Deploy to Kubernetes
        run: |
          kubectl apply -f k8s/
          kubectl set image deployment/user-service user-service=ghcr.io/${{ github.repository }}/user-service:${{ github.sha }} -n microservices
          kubectl rollout status deployment/user-service -n microservices
Summary
Success with microservices architecture hinges on the following:
- Domain-driven design - defining appropriate service boundaries
- Asynchronous communication - making full use of event-driven architecture
- Data management - ensuring consistency across distributed data
- Operability - implementing monitoring, logging, and tracing
- Security - building toward a zero-trust network
- Automation - continuous delivery through CI/CD pipelines
By applying these principles and best practices, you can build scalable, maintainable microservice systems.