
Securing Cloud AI Infrastructure

Ender
Cybersecurity pro by day, gamer and storyteller by night. I write about breaking systems, exploring worlds, and the tech that powers it all.

The AI Security Gap
#

13% of organizations have already experienced breaches of AI models or applications, and 97% of them lacked proper access controls. Here’s how to not be one of them.

In Part 1 of this series, we covered the threat landscape – the OWASP LLM Top 10, prompt injection, jailbreaking, and why AI security is fundamentally different from traditional application security. That post was about understanding what can go wrong.

This post is about infrastructure. The pipes. The plumbing. The IAM policies, VPC configurations, encryption settings, and logging that determine whether your cloud AI deployment is a fortress or a screen door.

Here’s the uncomfortable reality: only 37% of organizations have policies to manage AI or detect shadow AI. The other 63% are doing it live, figuring out security after the model is already processing production data. We saw the same pattern with cloud adoption a decade ago. The difference is that AI workloads process more sensitive data, have broader access to internal systems, and introduce attack vectors that didn’t exist six months ago.

Let’s fix that.

Quick Glossary
#

If you’re coming into this without deep cloud experience, here are the terms that matter for this post:

Term | What It Means
VPC | Virtual Private Cloud – your isolated network within a cloud provider
VPC Endpoint / PrivateLink | A way to access cloud services without traffic leaving the provider’s network (no public internet)
VPC Service Controls | Google’s perimeter security – creates a boundary around resources to prevent data exfiltration
IAM | Identity and Access Management – who can do what to which resources
KMS / CMK / CMEK | Key Management Service / Customer Managed Key / Customer Managed Encryption Key – you control the encryption keys instead of the provider
Service Principal / Managed Identity | An identity for your application (not a human) to authenticate with cloud services
CloudTrail / Activity Log / Audit Log | Logs of who did what in your cloud account
Data Plane vs Control Plane | Control plane = managing resources (create, delete, configure). Data plane = using resources (sending prompts, getting responses)

The Cloud AI Security Problem
#

Every major cloud provider now offers managed AI services. AWS has Bedrock. Azure has OpenAI Service. Google has Vertex AI. They handle the infrastructure – the GPUs, the model weights, the inference optimization – and you bring the prompts.

This is genuinely good. Managed services mean you don’t have to worry about patching CUDA drivers or scaling GPU clusters. But managed services also mean you’re trusting the provider’s defaults. And the defaults are not good enough.

The pattern across all three providers is the same:

  • Networking defaults to public endpoints
  • Logging of actual prompts and responses is disabled by default
  • IAM policies start overly permissive
  • Encryption uses provider-managed keys (adequate for compliance, insufficient for control)

Each of these defaults makes sense from a “get started quickly” perspective. None of them make sense from a security perspective. Let’s go provider by provider.

AWS Bedrock Security
#

Amazon Bedrock is AWS’s managed service for foundation models – Claude, Llama, Mistral, Titan, and others. You get API access to these models without managing infrastructure.

Here’s the security stack:

Control | Default | Recommended | Why
Network | Public endpoint | VPC endpoints + PrivateLink | Keeps traffic off the public internet
IAM | Broad bedrock:* | Fine-grained resource policies | Principle of least privilege
Encryption | AWS-managed KMS | Customer-managed CMK | You control key rotation and access
Logging | CloudTrail (control plane only) | CloudTrail + Model Invocation Logging | Capture actual prompts and responses

Network: VPC Endpoints
#

By default, when your application calls Bedrock, traffic routes over the public internet. Even though it’s TLS-encrypted, it’s still traversing public infrastructure. For most production workloads, that’s unnecessary risk.

The fix is VPC endpoints with PrivateLink. This creates a private connection between your VPC and Bedrock – traffic never leaves the AWS network.

You need two endpoints:

# Bedrock API (control plane - managing models, guardrails)
com.amazonaws.<region>.bedrock

# Bedrock Runtime (data plane - actual inference calls)
com.amazonaws.<region>.bedrock-runtime
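
Creating the runtime endpoint is a single CLI call – a sketch assuming placeholder VPC, subnet, and security group IDs in us-east-1:

# Create an interface endpoint for the Bedrock runtime (data plane)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.bedrock-runtime \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled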

Pair this with a VPC endpoint policy that restricts which principals can invoke which models:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSpecificModels",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/BedrockAppRole"
      },
      "Action": "bedrock:InvokeModel",
      "Resource": [
        "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet*",
        "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-haiku*"
      ]
    }
  ]
}

This does two things: it keeps your traffic private, and it limits which models can be invoked through this endpoint. If someone compromises an application that only needs Claude Haiku, they can’t pivot to running expensive Titan Image Generator calls on your bill.

IAM: Fine-Grained Access
#

The most common Bedrock IAM mistake I see is this:

{
  "Effect": "Allow",
  "Action": "bedrock:*",
  "Resource": "*"
}

Don’t do this. bedrock:* includes bedrock:DeleteGuardrail, bedrock:DeleteCustomModel, bedrock:CreateModelInvocationJob (batch inference that can run up a massive bill), and every other destructive or expensive action.

Here’s what a properly scoped policy looks like for an application that needs to invoke models and use guardrails:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeModels",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-*"
    },
    {
      "Sid": "UseGuardrails",
      "Effect": "Allow",
      "Action": "bedrock:ApplyGuardrail",
      "Resource": "arn:aws:bedrock:us-east-1:123456789012:guardrail/*"
    }
  ]
}

Specific actions. Specific resources. No wildcards on actions.

For organizations with multiple teams sharing an AWS account (which is common even though it shouldn’t be), use IAM condition keys to restrict access based on tags or specific resource ARNs. Better yet, use separate accounts per team with AWS Organizations.
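
As one hedged example of the condition-key approach, this inline policy (role name, policy name, and tag value are placeholders) only allows invocation when the calling role carries a matching team tag:

# Attach an inline policy gated on the principal's "team" tag
aws iam put-role-policy \
  --role-name BedrockAppRole \
  --policy-name bedrock-invoke-ml-platform \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-*",
      "Condition": {"StringEquals": {"aws:PrincipalTag/team": "ml-platform"}}
    }]
  }'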

Encryption: Customer-Managed Keys
#

Bedrock encrypts data at rest with AWS-managed KMS keys by default. This satisfies most compliance checkboxes, but it means AWS controls the key lifecycle. If you need to:

  • Control key rotation schedules
  • Revoke access to encrypted data instantly
  • Meet specific regulatory requirements around key management

Then switch to a customer-managed CMK. This applies to:

  • Custom model weights (fine-tuned models)
  • Training data
  • Model invocation logs
  • Guardrail configurations

The trade-off is operational complexity. You’re now responsible for key management. For most organizations, the default AWS-managed keys are fine. For regulated industries (healthcare, finance, government), CMKs are table stakes.
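
If you do go the CMK route, the key itself is ordinary KMS work – a sketch with a placeholder alias, after which you reference the key when configuring custom models, logging destinations, and guardrails:

# Create a customer-managed key, give it an alias, and enable automatic rotation
aws kms create-key --description "Bedrock custom models and invocation logs"
aws kms create-alias --alias-name alias/bedrock-cmk --target-key-id <key-id-from-create-key-output>
aws kms enable-key-rotation --key-id <key-id-from-create-key-output>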

Logging: The Critical Default You Need to Change
#

This is the big one. Bedrock model invocation logging is disabled by default.

Let me say that again: by default, AWS does not log what you send to or receive from AI models. CloudTrail captures control plane events (who created a guardrail, who modified a model), but not data plane events (what prompts were sent, what the model responded with).

This means if an attacker compromises your application and uses it to exfiltrate data through the AI model, or if an employee sends sensitive data to the model, you have no record of it.

To enable it, go to the Bedrock console, navigate to Settings, and turn on Model Invocation Logging. You can send logs to S3, CloudWatch, or both. I recommend both – S3 for long-term storage and analysis, CloudWatch for real-time alerting.
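
If you prefer to script it, the CLI equivalent is a single call – a sketch that assumes an existing S3 bucket, CloudWatch log group, and a delivery role Bedrock can assume; the field names follow the PutModelInvocationLoggingConfiguration API, so verify the exact shape against the current docs:

# Enable model invocation logging to both S3 and CloudWatch
aws bedrock put-model-invocation-logging-configuration --logging-config '{
  "s3Config": {"bucketName": "my-bedrock-logs", "keyPrefix": "invocations/"},
  "cloudWatchConfig": {"logGroupName": "/bedrock/invocations", "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole"},
  "textDataDeliveryEnabled": true,
  "imageDataDeliveryEnabled": true,
  "embeddingDataDeliveryEnabled": true
}'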

Once enabled, every invocation is logged with:

  • The full prompt (input)
  • The full response (output)
  • Token usage
  • Model ID
  • Timestamp
  • The IAM principal that made the call

Privacy consideration: You’re now logging every prompt. If your users are sending PII through your AI application, those logs contain PII. Plan your log retention, access controls, and data handling accordingly. Consider using Bedrock Guardrails with PII detection to filter sensitive data before it hits the model – we’ll cover that in depth in Part 3.

Azure OpenAI Security
#

Azure OpenAI Service gives you access to OpenAI models (GPT-4o, GPT-4, o1, o3, DALL-E) through Azure’s infrastructure. The key security advantage over using OpenAI’s API directly: your data is not used for model training.

That’s worth repeating. When you use Azure OpenAI, Microsoft explicitly states your data is not used to train, retrain, or improve any Microsoft or third-party models.

The key architectural difference: with Azure OpenAI, your data never leaves Microsoft’s infrastructure and never reaches OpenAI. Azure hosts the OpenAI models independently within your Azure tenant. The OpenAI API at api.openai.com also defaults to not using your data for training (as of March 2023, with a formal Data Processing Addendum), but your data is processed on OpenAI’s own infrastructure. Both services commit contractually to not training on your data—Azure enforces this through infrastructure separation, while OpenAI enforces it through policy.

Here’s the security stack:

Control | Default | Recommended | Why
Network | Public endpoint | Private endpoints, disable public access | Zero public attack surface
Auth | API keys | Managed identities (Entra ID) | No secrets to rotate or leak
Filtering | Content Safety enabled by default | Content Safety + Prompt Shields | Defense in depth
Monitoring | Basic metrics | Diagnostic settings + Log Analytics | Full audit trail

Network: Private Endpoints
#

Azure OpenAI supports private endpoints, which work similarly to AWS PrivateLink – your traffic stays on the Azure backbone instead of routing over the internet.
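
Creating the endpoint is one command – a sketch with placeholder resource names, assuming an existing VNet and subnet (the Cognitive Services sub-resource, or group ID, is account). You will also want a private DNS zone for privatelink.openai.azure.com so the resource hostname resolves to the private IP:

# Create a private endpoint for the Azure OpenAI resource
az network private-endpoint create \
  --name openai-private-endpoint \
  --resource-group your-rg \
  --vnet-name your-vnet \
  --subnet your-subnet \
  --private-connection-resource-id $(az cognitiveservices account show --name your-openai-resource --resource-group your-rg --query id -o tsv) \
  --group-id account \
  --connection-name openai-pe-connection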

The critical step most people miss: disable public network access after creating the private endpoint. Creating a private endpoint does not automatically disable the public one. You have to explicitly turn it off:

az cognitiveservices account update \
  --name your-openai-resource \
  --resource-group your-rg \
  --public-network-access Disabled

Without this step, you have two doors into your AI service – the private one you just built and the public one that was always there. Attackers will use whichever one you forgot about.

Auth: Kill Your API Keys
#

Azure OpenAI supports two authentication methods: API keys and Microsoft Entra ID (formerly Azure Active Directory) managed identities. Use managed identities. Stop using API keys.

API keys are:

  • Static secrets that get committed to git repos
  • Shared across team members via Slack
  • Embedded in configuration files that get copied to dev environments
  • Impossible to audit at the individual user level

Managed identities are:

  • Automatically rotated
  • Tied to specific Azure resources (not humans)
  • Auditable through Entra ID logs
  • Revocable instantly without redeploying anything
# Assign the "Cognitive Services OpenAI User" role to your app's managed identity
az role assignment create \
  --role "Cognitive Services OpenAI User" \
  --assignee <managed-identity-object-id> \
  --scope /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<resource>

The Cognitive Services OpenAI User role grants inference access only. If you’re using a Cognitive Services Contributor or Owner role for your application, you’re granting it the ability to delete the entire resource. Don’t.
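
Under the hood, the application simply exchanges its managed identity for a bearer token. A rough sketch from an Azure VM using the instance metadata endpoint – the deployment name and API version are placeholders, and most apps would use an SDK credential class rather than raw curl:

# Get an access token for Cognitive Services from the local IMDS endpoint (no stored secret)
TOKEN=$(curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://cognitiveservices.azure.com" \
  | python3 -c "import sys, json; print(json.load(sys.stdin)['access_token'])")

# Call the deployment with a bearer token instead of an api-key header
curl -s "https://your-openai-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-06-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'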

Content Filtering: Enabled by Default (Good), But Incomplete
#

Azure OpenAI ships with content filtering enabled by default. This is a genuine differentiator from other providers. Out of the box, you get filtering for:

  • Hate and fairness content
  • Sexual content
  • Violence
  • Self-harm
  • Prompt attacks (jailbreak and indirect injection detection)

The severity levels are configurable (Low, Medium, High), and you can create custom content filtering configurations for different deployments.

What you get: Prompt Shields are now enabled by default alongside content filtering, providing real-time detection of jailbreak attempts and indirect prompt injection. Advanced features like Spotlighting (which distinguishes trusted vs untrusted inputs) require explicit configuration. We’ll cover the full setup in Part 3 of this series.

Data Residency
#

Azure OpenAI processes data in the region where you deploy it. If you deploy in East US, your prompts and responses stay in the East US data center. This matters for GDPR, data sovereignty, and regulatory compliance.

However, the abuse monitoring pipeline may process content outside your region for safety purposes. If you’re subject to strict data residency requirements, review Microsoft’s current data processing documentation and consider requesting an exemption from abuse monitoring (available for approved use cases).

GCP Vertex AI Security
#

Vertex AI is Google’s unified AI platform. It provides access to Gemini models, PaLM, and third-party models, plus infrastructure for training, tuning, and deploying custom models.

Google’s approach to security is characteristically opinionated – a VPC Service Controls perimeter, once in place, is more restrictive than the equivalent AWS or Azure network controls, which is a good thing.

Control | Default | Recommended | Why
Network | Public endpoint | VPC Service Controls | Blocks all access from outside the perimeter
IAM | Project-level roles | Service accounts + fine-grained roles | Separate identities per workload
Encryption | Google-managed keys | CMEK (Customer Managed Encryption Keys) | You control key lifecycle
Logging | Admin Activity logs (always on) | Admin + Data Access audit logs | Full prompt/response trail

Network: VPC Service Controls
#

GCP’s VPC Service Controls work differently from AWS PrivateLink or Azure Private Endpoints. Instead of creating a private connection to a service, you create a perimeter around your resources. Anything outside that perimeter – including other GCP projects, the public internet, and even Google’s own services – is blocked unless explicitly allowed.

This is powerful. A VPC Service Controls perimeter around your Vertex AI resources means:

  • No data can leave the perimeter (exfiltration protection)
  • No unauthorized projects can access your models
  • API calls from outside the perimeter are denied
# Create an access policy
gcloud access-context-manager policies create \
  --organization=<org-id> \
  --title="AI Workload Policy"

# Create a service perimeter
gcloud access-context-manager perimeters create ai-perimeter \
  --policy=<policy-id> \
  --title="AI Workload Perimeter" \
  --resources="projects/<project-number>" \
  --restricted-services="aiplatform.googleapis.com" \
  --access-levels="<access-level>"

The trade-off: VPC Service Controls can be complex to configure, and overly restrictive perimeters can break legitimate workflows. Start with a dry-run perimeter that logs violations without blocking them, then tighten.
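
A dry-run version of the same perimeter looks roughly like this – the flag names follow the gcloud access-context-manager dry-run command group, so verify them against current gcloud documentation before relying on this sketch:

# Create the perimeter in dry-run mode – violations are logged, not blocked
gcloud access-context-manager perimeters dry-run create ai-perimeter \
  --policy=<policy-id> \
  --perimeter-title="AI Workload Perimeter (dry run)" \
  --perimeter-type=regular \
  --perimeter-resources="projects/<project-number>" \
  --perimeter-restricted-services="aiplatform.googleapis.com"

# Once the logs look clean, promote the dry-run config to enforced
gcloud access-context-manager perimeters dry-run enforce ai-perimeter --policy=<policy-id>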

IAM: Service Accounts Done Right
#

GCP’s IAM for Vertex AI uses predefined roles that are more granular than what you typically see in AWS or Azure:

Role | What It Grants
roles/aiplatform.user | Invoke predictions, manage own resources
roles/aiplatform.viewer | Read-only access to all AI resources
roles/aiplatform.admin | Full control (don’t use this for applications)
roles/aiplatform.customCodeServiceAgent | For training jobs that need to run custom code

The rule: one service account per workload. Don’t share service accounts across applications. Don’t use user credentials for service-to-service calls. Don’t grant aiplatform.admin to anything that doesn’t need to create or delete resources.
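
In practice that looks like this – a sketch with placeholder project and service account names:

# One dedicated service account per workload
gcloud iam service-accounts create vertex-chat-app \
  --display-name="Vertex AI chat application"

# Grant it only the user role, scoped to the project
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:vertex-chat-app@<project-id>.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"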

For multi-team environments, combine IAM with resource-level permissions so teams can only access their own endpoints, datasets, and models.

Encryption: CMEK
#

Vertex AI supports Customer Managed Encryption Keys (CMEK) through Cloud KMS. CMEK applies to:

  • Training data
  • Model artifacts
  • Prediction logs
  • Pipeline metadata

Like AWS, the default is Google-managed encryption. CMEK gives you control over key rotation, access policies, and the ability to destroy keys (which renders the encrypted data permanently unrecoverable – so be careful).
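
Setting the key up is standard Cloud KMS – a sketch with placeholder names that uses a rotation period as an example of the lifecycle control you gain; you then reference the key’s full resource name when creating Vertex AI resources that support CMEK:

# Create a key ring and a key with 90-day automatic rotation
gcloud kms keyrings create vertex-cmek --location=us-central1
gcloud kms keys create vertex-training-key \
  --keyring=vertex-cmek \
  --location=us-central1 \
  --purpose=encryption \
  --rotation-period=90d \
  --next-rotation-time="2026-06-01T00:00:00Z"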

Logging: Two Tiers
#

GCP has two types of audit logs for Vertex AI:

Admin Activity logs are always on and cannot be disabled. These capture control plane operations – creating endpoints, deploying models, modifying configurations. They’re free and retained for 400 days.

Data Access audit logs capture data plane operations – the actual prediction requests and responses. These are disabled by default and need to be explicitly enabled. They count against your Cloud Logging quota and can get expensive at scale.

# Enable Data Access audit logs for Vertex AI
gcloud projects get-iam-policy <project-id> --format=json > policy.json
# In policy.json, add an entry to the top-level "auditConfigs" list:
#   {"service": "aiplatform.googleapis.com",
#    "auditLogConfigs": [{"logType": "DATA_READ"}, {"logType": "DATA_WRITE"}]}
# Then apply the updated policy:
gcloud projects set-iam-policy <project-id> policy.json

The same privacy consideration applies here as with AWS: once you enable Data Access logging, you’re capturing prompts and responses. Plan accordingly.

Common Misconfigurations (The Ones That Will Get You Breached)
#

After going through three providers, the patterns are clear. These are the misconfigurations that show up in every cloud AI security audit:

1. Public Endpoints Without Network Restrictions
#

The mistake: Deploying a cloud AI service and leaving the public endpoint wide open.

Why it happens: It’s the default. The service works immediately. Nobody gets paged at 2 AM because they forgot to create a VPC endpoint.

The risk: Any authenticated request from anywhere on the internet can reach your AI service. If an API key leaks (and API keys leak constantly – GitGuardian found 12.8 million new secrets exposed in public GitHub repos in 2023 alone), the attacker can invoke your models from anywhere.

The fix: Private endpoints (AWS/Azure) or VPC Service Controls (GCP). At minimum, use IP allowlisting to restrict access to known CIDR ranges.
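
Using Azure as one example, adding an IP allow rule is a one-liner – a hedged sketch with a placeholder CIDR; note that the resource’s network default action also has to deny traffic that doesn’t match a rule, which may mean the portal or an ARM/Bicep template depending on your tooling:

# Allow requests only from a known office/VPN range
az cognitiveservices account network-rule add \
  --name your-openai-resource \
  --resource-group your-rg \
  --ip-address 203.0.113.0/24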

2. Disabled Data Plane Logging
#

The mistake: Not enabling model invocation logging / data access logging.

Why it happens: It’s opt-in on every provider. AWS, Azure, and GCP all disable data plane logging for AI services by default. There are good reasons for this (cost, privacy, volume), but the result is a blind spot.

The risk: You cannot investigate AI-related security incidents. If an attacker uses your AI service to process stolen data, or if an employee sends customer PII to the model, you have no forensic evidence.

The fix: Enable data plane logging. Accept the cost. Implement log retention policies and access controls to handle the privacy implications.

Provider | What to Enable | Where
AWS | Model Invocation Logging | Bedrock Console > Settings
Azure | Diagnostic Settings | OpenAI Resource > Monitoring > Diagnostic Settings
GCP | Data Access Audit Logs | IAM & Admin > Audit Logs

3. Overly Permissive IAM
#

The mistake: Granting bedrock:*, Cognitive Services Contributor, or aiplatform.admin to application workloads.

Why it happens: It’s faster. The application works. Nobody has time to figure out the minimum required permissions.

The risk: A compromised application can delete models, modify guardrails, exfiltrate training data, create expensive batch jobs, or disable security controls. The blast radius is everything instead of nothing.

The fix: Start with zero permissions. Add the specific actions your application needs. Test. Repeat. This is tedious. It also works.

4. API Keys Instead of Identity-Based Auth
#

The mistake: Using static API keys for service-to-service authentication.

Why it happens: API keys are simple. Copy the key, paste it into an environment variable, done.

The risk: Keys get committed to repos. Keys get shared in chat. Keys get copied to development environments that have weaker security. Keys cannot be scoped to specific IP addresses or time windows (on most platforms). When a key leaks, you don’t know who used it because keys aren’t tied to individual identities.

The fix:

  • AWS: Use IAM roles and instance profiles. No keys needed for EC2, Lambda, ECS.
  • Azure: Use managed identities. No keys needed for App Service, Functions, AKS.
  • GCP: Use service accounts with Workload Identity. No keys needed for GKE, Cloud Run, Cloud Functions.

5. Missing Content Filtering
#

The mistake: Not configuring content filtering or guardrails, or turning them off because they “interfere with the application.”

Why it happens: Content filters block some legitimate use cases. Developers turn them off during testing and forget to turn them back on.

The risk: Your AI service becomes an unfiltered pipeline for generating harmful content, leaking system prompts, or executing prompt injection attacks. If you read Part 1 of this series, you know how effective jailbreak attacks are against unguarded models.

The fix: Every provider has content filtering capabilities. Use them.

We’ll do a deep dive on all of these in Part 3.

Cross-Provider Comparison
#

Here’s the side-by-side comparison if you’re evaluating providers or running multi-cloud:

Capability | AWS Bedrock | Azure OpenAI | GCP Vertex AI
Private networking | VPC endpoints + PrivateLink | Private endpoints | VPC Service Controls
Identity auth | IAM roles | Managed identities (Entra ID) | Service accounts + Workload Identity
Default content filtering | Opt-in (Guardrails) | Enabled by default | Configurable safety filters
Prompt/response logging | Opt-in (Model Invocation Logging) | Opt-in (Diagnostic Settings) | Opt-in (Data Access Audit Logs)
Encryption at rest | KMS (default), CMK (optional) | Microsoft-managed (default), CMK (optional) | Google-managed (default), CMEK (optional)
Data training opt-out | Data not used for training | Data not used for training | Data not used for training
Multi-model support | Claude, Llama, Mistral, Titan, others | GPT-4o, GPT-4, o1, o3, DALL-E | Gemini, PaLM, third-party

One thing all three have in common: your data is not used to train their models. This is different from using the model providers’ APIs directly (OpenAI, Anthropic, etc.), where data handling policies vary. If data privacy is a primary concern, cloud-managed services give you a contractual guarantee.

Compliance Timeline
#

Security doesn’t exist in a vacuum. Regulatory frameworks are catching up to AI, and the deadlines are real.

EU AI Act
#

The EU AI Act is the most comprehensive AI regulation globally. Here’s what matters:

Date | Milestone
February 2025 | Prohibited AI practices banned (social scoring, real-time biometric surveillance)
August 2025 | Transparency requirements for general-purpose AI models
August 2026 | Full enforcement for high-risk AI systems

If your AI system classifies as “high-risk” (employment decisions, credit scoring, law enforcement, critical infrastructure), you need:

  • Risk management systems
  • Data governance documentation
  • Technical documentation and logging
  • Human oversight mechanisms
  • Accuracy, robustness, and cybersecurity measures

The infrastructure decisions we’ve discussed – logging, access controls, encryption – are foundational to meeting these requirements. You can’t demonstrate compliance without audit trails.

SOC 2 + AI
#

SOC 2 audits now include AI-specific criteria. If your organization undergoes SOC 2 examinations, expect auditors to ask about:

  • How AI models are accessed and authenticated
  • What data AI systems can access
  • How AI outputs are monitored and logged
  • What controls prevent unauthorized AI usage (shadow AI)
  • How AI-specific risks are assessed and managed

The good news: if you implement the controls we’ve covered (private networking, fine-grained IAM, comprehensive logging, encryption), you’re most of the way there.

ISO 42001
#

ISO 42001 is the international standard for AI Management Systems, published in December 2023. It provides a framework for organizations to manage AI risks, similar to how ISO 27001 works for information security.

Adoption is accelerating. If your organization already has ISO 27001 certification, ISO 42001 builds on that foundation. If not, it’s worth putting on the roadmap.

What To Do Now
#

Here’s the action plan, broken down by timeframe. Same structure as Part 1, because consistency matters.

Today (30 minutes)
#

Audit your cloud AI network configuration. For each cloud AI service you use:

# AWS: Check if Bedrock has VPC endpoints configured
aws ec2 describe-vpc-endpoints \
  --filters "Name=service-name,Values=com.amazonaws.*.bedrock-runtime" \
  --query "VpcEndpoints[].{ID:VpcEndpointId,State:State}"

# Azure: Check if public access is disabled
az cognitiveservices account show \
  --name <resource-name> \
  --resource-group <rg> \
  --query "properties.publicNetworkAccess"

# GCP: Check if VPC Service Controls perimeter exists
gcloud access-context-manager perimeters list --policy=<policy-id>

If any service is publicly accessible and doesn’t need to be, fix it.

This Week
#

Enable data plane logging. On every provider. For every AI service. Accept the cost. You need the audit trail.

Review IAM policies. Search for:

  • bedrock:* in AWS
  • Cognitive Services Contributor or Owner in Azure
  • aiplatform.admin in GCP

Replace them with the minimum permissions your applications actually need.
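
For the AWS side, a rough way to find offenders – a sketch that scans customer-managed policies for a Bedrock wildcard (it doesn’t cover inline policies, so treat it as a starting point):

# List customer-managed policies whose default version grants bedrock:*
for arn in $(aws iam list-policies --scope Local --query 'Policies[].Arn' --output text); do
  ver=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text)
  aws iam get-policy-version --policy-arn "$arn" --version-id "$ver" --output json \
    | grep -q 'bedrock:\*' && echo "$arn"
done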

Kill API keys. Switch to IAM roles (AWS), managed identities (Azure), or service accounts with Workload Identity (GCP). This is the single highest-impact change you can make.

This Month
#

Implement content filtering. At minimum, enable the default guardrails each provider offers. Plan for a deeper implementation (covered in Part 3).

Document your AI inventory. For compliance purposes, you need a record of:

  • What AI services you’re running
  • What data they can access
  • What controls are in place
  • Who is responsible for each service

Start a compliance gap analysis. If EU AI Act, SOC 2, or ISO 42001 applies to you, map your current controls against the requirements. The infrastructure decisions you make now determine how painful compliance will be later.

What’s Next
#

This post covered the infrastructure layer – the networking, IAM, logging, and encryption that form the foundation of cloud AI security. In the remaining posts:

  • Part 1: AI Security Fundamentals – The threat landscape, OWASP LLM Top 10, and why AI security is different (published)
  • Part 3: AI Guardrails and User-Facing Security – Configuring Bedrock Guardrails, Azure Prompt Shields, Anthropic’s Constitutional Classifiers, and OpenAI’s Moderation API. The content filtering deep dive.
  • Part 4: Securing Local AI Installations – Hardening Ollama, llama.cpp, and vLLM. Network exposure, model supply chain security, and container isolation.

Infrastructure gets you 80% of the way there. The remaining 20% is guardrails, monitoring, and operational discipline – which is exactly what Parts 3 and 4 will cover.


Further Reading
#

Series Navigation:

  • Part 1: AI Security Fundamentals
  • Part 2: Securing Cloud AI Infrastructure (you are here)
  • Part 3: AI Guardrails and User-Facing Security (coming soon)
  • Part 4: Securing Local AI Installations (coming soon)