FIELD NOTES
Blog
AWS, infrastructure, incident response, and production systems. Written for engineers and founders who want the real playbooks, not fluff.
Fortinet shipped a critical FortiClientEMS fix, and it is the kind of bug attackers love
Fortinet just patched a critical SQL injection vulnerability in FortiClient Endpoint Management Server (FortiClientEMS) that can let an unauthenticated attacker execute unauthorized code or commands through crafted HTTP requests. The CVE is CVE-2026-21643.

If that sounds abstract, here is the real-world translation. FortiClientEMS is a control plane for endpoints. When the control plane is compromised, "one server got popped" can quickly become "every managed machine is now at risk".

And this news lands right after another Fortinet issue, CVE-2026-24858, where Fortinet and CISA both describe active exploitation tied to FortiCloud SSO admin login paths. So yes, the vibes are bad.

---

What exactly got fixed in FortiClientEMS

CVE-2026-21643 is an SQL injection bug, meaning user-controlled input can be interpreted as part of a database query. In this case, it is reachable via HTTP requests and described as allowing an attacker to execute unauthorized code or commands without authentication.

Affected and fixed versions

- Affected: FortiClientEMS 7.4.4
- Fix: upgrade to 7.4.5 or later
- Not affected: FortiClientEMS 7.2 and 8.0 (as stated in the PSIRT advisory)

Severity score, and why people are quoting different numbers

Some reporting cites 9.1, but the NVD entry includes a 9.8 CVSS v3.1 vector from Fortinet as the CNA (the scoring authority here). Treat it as "critical either way": it is network reachable, requires no authentication, and has high impact.

Is it exploited in the wild

Fortinet has not publicly said this one is exploited. That does not mean it is safe. It means you still have a patch window before it turns into a mass-scanning festival.

---

Hacker perspective: why this is spicy

Attackers are obsessed with systems that manage other systems. FortiClientEMS is literally that. A normal endpoint compromise is one device. A management-plane compromise is leverage.

From an attacker's perspective, the dream is simple. Find a management interface that is reachable. Gain a foothold once. Use the platform's legitimate privileges to push changes at scale.

SQL injection is particularly nasty because it often starts as "database manipulation" and ends as "application behavior control", and Fortinet's own description explicitly warns about code or command execution outcomes. So even if you think your endpoints are hardened, that does not help if the thing telling them what policy to enforce gets owned.

---

The other Fortinet problem that actually is being exploited

Now for the second part of the story, because it changes the risk mood.

CVE-2026-24858 is an authentication bypass involving FortiCloud SSO. The condition is important. If FortiCloud SSO authentication is enabled on a device, an attacker with a FortiCloud account and a registered device may be able to log into devices registered to other accounts.

CISA published an alert about ongoing exploitation, and the CVE appears in CISA's Known Exploited Vulnerabilities (KEV) catalog. That is the government version of "people are getting hit, patch now".

Fortinet's own guidance and analysis describe attacker behavior consistent with real compromise workflows, including creating local admin accounts for persistence, making config changes that can grant access, and exfiltrating firewall configurations.

---

What clients should do, like today

This is the pragmatic part. No theatre. No vendor drama. Just actions.

If you run FortiClientEMS
1. Upgrade FortiClientEMS 7.4.4 to 7.4.5 or later immediately.
2. Lock down exposure. Treat EMS like a crown-jewel admin surface. Put it behind a VPN, an allowlist, and management network segmentation.
3. Watch the EMS host for abnormal process launches and suspicious web request patterns. Even without a published exploitation statement, high-severity bugs get targeted fast.

If you use FortiCloud SSO admin login in Fortinet products

1. Check whether FortiCloud SSO admin login is enabled, and disable it if you do not absolutely need it.
2. Hunt for persistence. Look for unexpected local admins and admin-level config changes around the exploitation timeframe described by Fortinet and CISA.
3. Assume config sensitivity. If configs were accessed or exported, treat that as a serious data exposure, because it can reveal network topology, VPN settings, and security rules.

---

The bigger lesson

This is not just "Fortinet had a bug". Every vendor has bugs. The real point is that management plane and identity paths are where single failures become ecosystem failures.

If I were a defender reading this, I would take away one rule. If it can manage your fleet or your admin login, patch it like it can take down your company. Because it can.
ECS Task Role AccessDenied is always fixable if you stop blaming AWS
You know the vibe: the container is healthy, the service is green, the app starts… and then your logs say:

```
AccessDeniedException: User is not authorized to perform s3:GetObject
```

At that point, most teams do the classic panic dance. They slap AmazonS3FullAccess on some role, redeploy, and pray. Sometimes it "works" and sometimes it still fails, which is even worse because now it feels random.

It isn't random. It's usually one of three things:

- You gave permissions to the wrong role
- The right role exists, but the task isn't using it
- The role is right, but you're missing a second permission edge (KMS, resource policies, STS, VPC endpoints)

This post is the production-grade runbook for debugging it without guessing.

The mental model that stops the pain

In ECS you have two IAM roles that people constantly mix up:

Task execution role. This is what ECS needs to start your task: pulling images, writing logs, fetching secrets at startup, and so on. AWS calls it the "task execution IAM role."

Task role. This is what your application code uses once it's running, when it calls AWS APIs via an SDK (S3, DynamoDB, SQS, Secrets Manager, you name it). AWS calls it the "task IAM role."

If your app is throwing AccessDenied while calling AWS APIs, the permissions almost always belong on the task role, not the execution role.

Why this works cleanly: ECS delivers the role credentials to the container via a standard container credentials flow (the SDK reads an env var like AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and fetches temporary creds).

The fastest way to see what identity your container is actually using

You want to stop arguing about what role "should" be used and instead prove what role is used. Inside the container (or via ECS Exec), run:

```
aws sts get-caller-identity
```

If you don't have the AWS CLI in the image, do the same with your SDK (print the caller identity once on startup), or temporarily add a minimal debug endpoint that calls STS and returns the ARN.

If you see an ARN you didn't expect, the problem is upstream: task definition, role attachment, trust policy, or your SDK credential chain.

The classic failure mode

You added permissions to the execution role, redeployed, and still got AccessDenied. That is completely consistent with how ECS is designed. The execution role is for ECS "plumbing," the task role is for your app's runtime calls.

So the real question becomes: does your task definition actually set a task role? In the task definition JSON, you usually want both:

- executionRoleArn
- taskRoleArn

Example skeleton:

```json
{
  "family": "my-service",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/myServiceTaskRole",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-api:latest",
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-service",
          "awslogs-region": "eu-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
```

If taskRoleArn is missing, your app will not have the intended permissions.

Trust policy checks that waste hours if you forget them

Even if you set taskRoleArn, ECS can only assume it if the role trust policy allows the ECS tasks service principal.
Your task role trust policy should look like this (the important part is the principal):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

If this is wrong, you'll see "access denied" style failures that look like permissions problems but are actually "role cannot be assumed." AWS has a solid write-up on ECS role best practices and why task roles are the right isolation boundary.

The boring but correct way to write the policy

Let's say your app needs to read from one S3 bucket prefix, and nothing else. Do this (least privilege), not AmazonS3FullAccess:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadArtifacts",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/private/artifacts/*"]
    },
    {
      "Sid": "ListPrefix",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket"],
      "Condition": {
        "StringLike": { "s3:prefix": ["private/artifacts/*"] }
      }
    }
  ]
}
```

If you only grant GetObject and forget ListBucket, you'll get weird "works sometimes" behaviour depending on whether your code lists before reading.

The hard mode gotchas that make people think ECS is cursed

These are the ones that bite experienced engineers because they are not obvious.

1) SSE-KMS encrypted S3 objects
Your role can have S3 permissions and still fail because the object is encrypted with KMS and you don't have kms:Decrypt for the key.
Symptoms: S3 calls fail even though the policy "looks right."
Fix: add KMS permissions on the key (and check the key policy too).

2) Bucket policy or resource policy overrides you
An IAM allow does not automatically win if the bucket policy denies, or only allows a different principal. The same goes for Secrets Manager resource policies.
Symptoms: you swear the role has permissions, but AccessDenied persists.
Fix: inspect the resource policy and make sure it allows your task role ARN.

3) Your SDK is not using ECS container credentials
ECS injects a container credentials endpoint for the SDK to use. If your app is overriding credentials (env vars, shared config, a hardcoded profile), it might ignore the task role entirely. AWS documents how container credential providers work and what variables SDKs use.
Symptoms: get-caller-identity shows an unexpected principal.
Fix: remove overrides, ensure the SDK is allowed to use default credential resolution, and verify the ECS-provided env var exists.

4) You're debugging the execution role instead of the task role
This happens a lot when the logs are fine (execution role works) but runtime calls fail (task role missing or wrong). The roles have different jobs.

The no-guesswork triage flow I use in real systems

1. Confirm the failing AWS action and resource from the error text
2. From inside the container, run aws sts get-caller-identity (or the SDK equivalent)
3. Confirm the task definition has taskRoleArn set
4. Confirm the task role trust policy allows ecs-tasks.amazonaws.com
5. Confirm the IAM policy includes the exact action and the correct resource ARN
6. Check resource policies (S3 bucket policy, KMS key policy, Secrets Manager policy)
7. If still stuck, use AWS's ECS IAM role configuration troubleshooting guide as a structured checklist

This is the difference between a senior engineer and a chaos goblin: you turn "it's broken" into a deterministic elimination process.

Make it visual on your blog

If you want this post to slap, add one diagram. Either draw it in Figma, or recreate it cleanly.
Diagram idea 1. Two lanes:

- ECS agent lane: "Pull image, send logs, fetch secrets at startup" → Execution role
- Application lane: "Call S3, DynamoDB, SQS, Secrets at runtime" → Task role

Reference the AWS docs in the caption so it looks legit.

Diagram idea 2. Credentials flow: Task role → STS temporary creds → ECS injects container credentials endpoint → SDK reads env var → API call succeeds.

Copy-ready ending checklist

If your ECS task is throwing AccessDenied, check this in order:

1. The task definition has taskRoleArn set (not just the execution role)
2. The task role trust policy allows ECS tasks (ecs-tasks.amazonaws.com)
3. Your container is actually using that role (STS caller identity; a small SDK sketch follows this checklist)
4. The policy matches the exact action and resource ARN
5. Resource policies and KMS aren't silently blocking you
6. The SDK is not overridden away from container credentials
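To make item 3 cheap to check, here is a minimal sketch of logging the caller identity once at startup with the AWS SDK for JavaScript v3. It is one way to do it, not the only one; swap in whatever SDK your service already uses.

```ts
// check-identity.ts: log which IAM principal the container is actually using.
// If the ARN is not your task role, the problem is upstream (taskRoleArn missing,
// trust policy wrong, or a credential override inside the container).
import { STSClient, GetCallerIdentityCommand } from "@aws-sdk/client-sts";

export async function logCallerIdentity(): Promise<void> {
  // Uses the default credential chain, which includes the ECS container credentials endpoint.
  const sts = new STSClient({});
  const identity = await sts.send(new GetCallerIdentityCommand({}));
  console.log("Running as:", identity.Arn, "in account:", identity.Account);
}

logCallerIdentity().catch((err) => {
  // If this fails outright, the container has no usable credentials at all,
  // which points at the credential chain rather than a missing IAM action.
  console.error("Could not resolve caller identity:", err);
});
```

For a Fargate task using its task role, you should see an assumed-role ARN that contains the task role name; anything else means the checklist above has an unchecked box.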
Introducing SnappyCart – A Plug-and-Play React Cart Component for Modern Web Apps
Very excited to introduce SnappyCart - a free, open source React cart component that makes it incredibly easy to drop a shopping cart into your app. Whether you're building a personal project, a prototype, or scaling an eCommerce product, SnappyCart gives you a clean and extendable starting point.

Why SnappyCart?

Most cart implementations are either too bloated or too rigid. SnappyCart is designed to be:

- Lightweight and fast
- Built on React context for global state
- Headless, but comes with a default UI
- Fully tested (with Vitest)
- Dev-friendly (Prettier, ESLint, Husky, and GitHub Actions CI included)

Key Features:

- Cart Context API: add, remove, and update items globally
- CartDrawer UI: slide-out panel with product count and controls
- Flexible hooks: use useCart() in any component (a quick usage sketch is at the end of this post)
- Unit tested: safe to use in production apps
- Vite + TypeScript: lightning-fast DX

Get Started

```
npm install snappycart
```

```jsx
import { CartProvider, CartDrawer, useCart } from 'snappycart';

<CartProvider>
  <CartDrawer />
</CartProvider>
```

🔗 Links

- 🛒 npm package
- 🧑‍💻 GitHub repo

This is just the beginning. I'm working on a SnappyCart Pro edition with persistent cart state, analytics, and embeddable checkout integrations. And this repo will get more free features, advanced accessibility features, and more.

If you're curious or want to contribute, head to the GitHub repo and drop a star ⭐️. Take a minute to check out the package and rate it. All feedback is much appreciated!
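For a taste of the hook-based API, here is a minimal sketch of a product button wired up with useCart(). The exact return shape of the hook (items, addItem, the item fields) is assumed here for illustration; check the README for the real signatures.

```tsx
// AddToCartButton.tsx: hypothetical usage sketch. The useCart() return shape
// (items, addItem) and the item fields are assumptions, not taken from the docs.
import { useCart } from 'snappycart';

export function AddToCartButton({ id, name, price }: { id: string; name: string; price: number }) {
  // Assumed API: the current list of cart items plus an addItem action.
  const { items, addItem } = useCart();

  return (
    <button onClick={() => addItem({ id, name, price, quantity: 1 })}>
      Add to cart ({items.length} in cart)
    </button>
  );
}
```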
When IAM Changes Kill Production and CloudTrail Tells You Exactly Who Did It
A single IAM change can take down production in a way that looks like "the app is broken" but is actually "someone removed a permission at the worst possible time."

The scary part is not the outage. The scary part is the meeting after, where everyone asks the same question. Who did it.

This is where CloudTrail turns from "security checkbox" into your most valuable debugging tool. CloudTrail records AWS activity across the console, CLI, and SDKs, and it is designed to answer "who did what, when, and from where."

This post is a real-world playbook for solving the IAM-change outage fast, and then hardening your account so the same class of failure becomes rare and boring.

The failure mode you see at 2am

Your symptoms usually look like one of these:

- Your backend starts returning 500s after a deploy that "should have been safe."
- ECS tasks or Lambda invocations start failing with AccessDenied.
- A background job silently stops writing to S3, DynamoDB, SQS, KMS, you name it.
- Someone "just tweaked permissions" and now half your stack is on fire.

The real root cause is usually one of these IAM events:

- a policy detached from a role
- an inline policy overwritten
- a managed policy version changed
- a role trust policy updated so the service can no longer assume it

That last one is extra spicy because it breaks identity at the source.

The 10 minute forensic loop that makes you look terrifyingly competent

You are trying to answer four questions:

1. What changed
2. Who changed it
3. Where they changed it from
4. What exactly was affected

CloudTrail gives you those answers when you know what to look for.

Step 1: Find the failing principal and the exact permission error

Start from the error message in your app logs. You want:

- the AWS service being called
- the API action being denied
- the role ARN or assumed-role ARN

This is your "search key" into CloudTrail.

Step 2: Use CloudTrail Event history for the first pass

CloudTrail Event history is enabled by default and gives you a searchable record of the past 90 days of management events in a region. It is immutable and fast for incident response.

In the CloudTrail console's Event history, filter around the time the outage started and search for IAM changes. The usual suspects are:

- DetachRolePolicy
- AttachRolePolicy
- PutRolePolicy
- DeleteRolePolicy
- CreatePolicyVersion
- SetDefaultPolicyVersion
- UpdateAssumeRolePolicy

If you already know the role name, filter by resource name too. (The same lookup can be scripted; see the sketch after Step 4.)

Step 3: Open the event record and read it like a crime scene report

CloudTrail event records have a predictable structure, and the fields you care about are consistent across services. The high-signal fields are:

- eventTime
- eventName
- userIdentity
- sourceIPAddress
- userAgent
- requestParameters
- responseElements
- errorCode and errorMessage, if the change failed

The single most important section is userIdentity. It tells you what kind of identity performed the action, what credentials were used, and whether the call came from an assumed role. This is where you'll spot patterns like:

- a human using the console
- a CI role assumed via STS
- a break-glass role used outside normal hours
- a third-party integration doing something it should never do

Now you have your answer for "who did it," plus enough context to be fair about it.

Step 4: Confirm blast radius in a second query

Once you find the first IAM change event, widen the time window by 10 minutes and search for adjacent changes. IAM outages are often "two edits," not one.
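Steps 2 and 4 are easy to script once you know the event names. Here is a minimal sketch with the AWS SDK for JavaScript v3, assuming the default 90-day Event history is enough for your incident window:

```ts
// lookup-iam-changes.ts: pull recent IAM change events from CloudTrail Event history.
import { CloudTrailClient, LookupEventsCommand } from "@aws-sdk/client-cloudtrail";

const IAM_EVENTS = [
  "DetachRolePolicy", "AttachRolePolicy", "PutRolePolicy", "DeleteRolePolicy",
  "CreatePolicyVersion", "SetDefaultPolicyVersion", "UpdateAssumeRolePolicy",
];

export async function findIamChanges(start: Date, end: Date): Promise<void> {
  const cloudtrail = new CloudTrailClient({});
  for (const eventName of IAM_EVENTS) {
    // LookupEvents accepts a single lookup attribute per call, so loop over event names.
    const result = await cloudtrail.send(new LookupEventsCommand({
      LookupAttributes: [{ AttributeKey: "EventName", AttributeValue: eventName }],
      StartTime: start,
      EndTime: end,
    }));
    for (const event of result.Events ?? []) {
      // event.CloudTrailEvent holds the full JSON record, including userIdentity and sourceIPAddress.
      console.log(event.EventTime, event.EventName, event.Username, event.EventId);
    }
  }
}

// Example: the 30 minutes around the start of the AccessDenied spike.
findIamChanges(new Date(Date.now() - 30 * 60 * 1000), new Date()).catch(console.error);
```

Pipe the output wherever your incident channel lives; the point is to stop eyeballing the console when the timeline matters.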
A common sequence: someone detaches a policy, then attempts to fix it by attaching a different one, then updates the trust policy, then accidentally makes it worse. CloudTrail will show that sequence, but it will not show events in a guaranteed order inside log files, so always lean on timestamps instead of expecting a neat stack trace.

When Event history is not enough, and what to do instead

Event history is per region and 90 days. That is perfect for most incidents, but not for audits, long-running mysteries, and multi-account org setups.

Trails for retention and real monitoring

CloudTrail trails can deliver events to an S3 bucket and optionally to CloudWatch Logs and EventBridge. This is how you get long-term retention and real-time detection. AWS notes CloudTrail typically delivers logs to S3 within about 5 minutes on average, which is good enough for most alerting pipelines.

CloudTrail Lake for fast SQL search at scale

CloudTrail Lake lets you run SQL-based queries on event data stores. It is powerful for investigations across accounts and regions, but it incurs charges, so use it intentionally.

The minimum viable detective control that prevents repeats

Once you have been burned by IAM once, you stop treating it as "just permissions" and start treating it like production configuration. The simplest hardening is:

1. Route CloudTrail events into EventBridge
2. Match on IAM change API calls
3. Alert immediately to your incident channel

EventBridge can receive AWS service events delivered via CloudTrail, including "AWS API Call via CloudTrail" events. You do not need a huge SIEM to start. You just need to know when someone touches the keys to the kingdom. (A minimal rule sketch appears at the end of this post.)

The minimum viable prevention that reduces your blast radius

Detective controls tell you what happened. Preventive controls make it harder for the incident to happen at all.

Permission boundaries for roles that create roles

Permission boundaries set the maximum permissions an IAM identity can ever get, even if someone attaches an overly broad policy later. This is a big deal for teams that want developers to move fast without letting them mint new admin.

SCPs for org-wide guardrails

Service control policies in AWS Organizations restrict what accounts can do. They do not grant permissions, they only limit what is possible.

Founder translation: even if someone fat-fingers a policy in a single account, your org-level seatbelt can stop the worst actions from being executable.

Access Analyzer to kill "we just gave it admin" culture

IAM Access Analyzer can help review unused or risky access, and it can generate least-privilege policies based on CloudTrail activity. That is a practical way to replace broad permissions with what the system actually uses.

Two diagrams that make this post feel premium

Diagram 1: The IAM outage timeline. A simple horizontal timeline with four blocks:

1. Deployment finishes
2. AccessDenied spikes
3. IAM change event in CloudTrail
4. Recovery by restoring the policy or trust relationship

Under the CloudTrail block, list the exact event name you found, the identity type, and the source IP.

Diagram 2: The hardened control loop. A loop diagram: IAM change attempt → CloudTrail record → EventBridge rule → alert → human review → remediation → policy hardening. This diagram sells "operator mindset" instantly. You can build it in Figma in 10 minutes.

The takeaways that matter in real teams

- CloudTrail is your truth layer for "who changed what," and Event history gives you 90 days of fast answers by default.
- Trails are how you graduate from debugging to monitoring, because they deliver to S3 and can feed CloudWatch and EventBridge.
- Prevention is not one thing. It is boundaries, org guardrails, and continuous least-privilege cleanup.
- The best part is that all of this makes you faster, not slower. The whole point is to make outages boring and audits trivial.
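And here is what the minimum viable detective control can look like in practice: a sketch that creates an EventBridge rule matching IAM write calls recorded by CloudTrail. The rule name and the SNS topic ARN are placeholders; point the target at whatever your incident channel already uses.

```ts
// iam-change-alert.ts: create an EventBridge rule that fires on IAM changes
// recorded by CloudTrail ("AWS API Call via CloudTrail" events).
import { EventBridgeClient, PutRuleCommand, PutTargetsCommand } from "@aws-sdk/client-eventbridge";

const eventbridge = new EventBridgeClient({});

const pattern = {
  source: ["aws.iam"],
  "detail-type": ["AWS API Call via CloudTrail"],
  detail: {
    eventSource: ["iam.amazonaws.com"],
    eventName: [
      "DetachRolePolicy", "AttachRolePolicy", "PutRolePolicy", "DeleteRolePolicy",
      "CreatePolicyVersion", "SetDefaultPolicyVersion", "UpdateAssumeRolePolicy",
    ],
  },
};

export async function createIamChangeAlert(): Promise<void> {
  await eventbridge.send(new PutRuleCommand({
    Name: "iam-change-alert",                // placeholder rule name
    EventPattern: JSON.stringify(pattern),
    State: "ENABLED",
  }));

  await eventbridge.send(new PutTargetsCommand({
    Rule: "iam-change-alert",
    Targets: [{
      Id: "incident-channel",
      Arn: "arn:aws:sns:eu-west-2:123456789012:incident-alerts", // placeholder SNS topic
    }],
  }));
}

createIamChangeAlert().catch(console.error);
```

One caveat worth remembering: IAM is a global service, so its CloudTrail events are recorded in US East (N. Virginia), which is where this rule needs to live if you go this route.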
OIDC Login in the Real World and How to Debug Cognito Hosted UI Without Losing Your Mind
Who this is for: engineers shipping web apps, and founders who want to understand why "login" is never "just login."

The uncomfortable truth

Most "auth bugs" aren't bugs. They're protocol misunderstandings hiding behind a button that says "Sign in."

OIDC (OpenID Connect) is simple in theory:

- OAuth 2.0 handles authorization.
- OIDC adds identity (who the user is) via the ID token.

In practice, your app is a bunch of moving parts, and Cognito is strict (as it should be). This post gives you:

- the exact redirect trace for Cognito Hosted UI
- PKCE explained in a way you can actually implement
- token reality: ID vs access vs refresh
- the pitfall list that causes 90% of production pain
- a security posture I'd defend in a serious review

The flow in one diagram (the only mental model you need)

You're doing the Authorization Code Flow with PKCE (best practice for browser apps). The user gets redirected to Cognito, logs in, Cognito redirects back with a short-lived code, and your app exchanges that code for tokens at the token endpoint.

Sequence diagram (Figma spec): create 4 vertical swimlanes: Browser, Your App (Frontend), Cognito Hosted UI, Your API.

1. Browser → Cognito: GET /oauth2/authorize (with code_challenge, state, nonce)
2. Cognito → Browser: redirects to login UI
3. Browser → Cognito: user authenticates
4. Cognito → Browser: 302 redirect back to your redirect_uri with ?code=...&state=...
5. Frontend → Cognito: POST /oauth2/token (with code_verifier)
6. Cognito → Frontend: returns tokens
7. Frontend → API: Authorization: Bearer <access_token>

The actual redirect trace (Cognito Hosted UI)

1) Your app sends the user to the authorization endpoint

Cognito's authorize endpoint is:

```
https://<your-domain>.auth.<region>.amazoncognito.com/oauth2/authorize
```

Example:

```
GET https://YOUR_DOMAIN.auth.eu-west-2.amazoncognito.com/oauth2/authorize
  ?client_id=YOUR_CLIENT_ID
  &response_type=code
  &scope=openid%20email%20profile
  &redirect_uri=https%3A%2F%2Fapp.yoursite.com%2Fauth%2Fcallback
  &state=RANDOM_CSRF_STRING
  &nonce=RANDOM_NONCE
  &code_challenge=BASE64URL_SHA256(code_verifier)
  &code_challenge_method=S256
```

Key points:

- response_type=code is the flow you want.
- scope must include openid if you want OIDC identity (an ID token).
- state is for CSRF protection (you must validate it on return).
- nonce is for ID token replay protection (you must validate it if present).
- code_challenge_method=S256 is the safe PKCE mode.

2) Cognito redirects back with a code

After login, Cognito redirects to your callback:

```
https://app.yoursite.com/auth/callback?code=SplxlOBeZQQYbYS6WxSbIA&state=RANDOM_CSRF_STRING
```

At this point, you have no tokens yet. You only have a code.

3) Your app exchanges the code for tokens

Cognito's token endpoint:

```
https://<your-domain>.auth.<region>.amazoncognito.com/oauth2/token
```

Request:

```
POST /oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&
client_id=YOUR_CLIENT_ID&
code=THE_CODE_YOU_GOT&
redirect_uri=https%3A%2F%2Fapp.yoursite.com%2Fauth%2Fcallback&
code_verifier=YOUR_ORIGINAL_CODE_VERIFIER
```

If it works, Cognito returns tokens. And here's a detail people miss: in Cognito, the authorization code grant is the only flow that can return ID + access + refresh tokens together.

PKCE explained like a normal person

PKCE exists because SPAs are "public clients." You can't safely hide a client secret in a browser. So PKCE binds the flow to the client that started it.
You generate:

- code_verifier = a random high-entropy string (store it temporarily)
- code_challenge = BASE64URL(SHA256(code_verifier))

You send code_challenge in /authorize, and later you send code_verifier to /token. The server checks they match. That blocks "stolen code" attacks.

Important: use S256 and don't accept downgrades to "plain." RFC 7636 explicitly warns against downgrade behavior.

Tokens: what you got back and what they're for

Cognito returns up to three tokens:

ID token (JWT). Identity. "Who is this user?" Used by your frontend to show user info and confirm the login session.

Access token (JWT). Authorization. "What can this user do?" Used in Authorization: Bearer ... to call APIs. Cognito access tokens include scopes/groups/claims and are meant for access control.

Refresh token (opaque). Session continuation. Cognito refresh tokens are encrypted and opaque (you can't decode them like JWTs).

Token lifecycle diagram (Figma spec): draw three horizontal bars: ID token (short), access token (short), refresh token (long). Annotate:

- the access token expires quickly
- the refresh token is used to mint new access tokens
- the ID token is not used for API auth

The 8 production pitfalls that waste weeks

1) Callback URL mismatch (the classic)
Cognito is strict about redirect_uri. If your request's callback URL doesn't exactly match what you configured, it fails. This is the number one "it works locally but not in prod" issue.

2) Confusing the user pool domain with the discovery domain
Cognito login endpoints live on your user pool domain, but discovery endpoints live on a different hostname:

```
https://cognito-idp.<region>.amazonaws.com/<userPoolId>/.well-known/openid-configuration
```

If you're verifying tokens, you'll likely also need JWKS:

```
https://cognito-idp.<region>.amazonaws.com/<userPoolId>/.well-known/jwks.json
```

3) Skipping state validation
If you don't validate state, you're inviting CSRF-style attacks into your login flow. Your "Sign in" becomes "Sign in as someone else."

4) Skipping nonce validation
OIDC says that if nonce is in the ID token, the client must verify it matches what was sent. It's there to mitigate replay.

5) Wrong scopes
If you forget openid, you're not doing OIDC properly, and you'll wonder why you're not getting an ID token or user identity claims.

6) Expecting refresh tokens from the wrong flow
If you use the wrong grant/flow, you'll miss refresh tokens and then you'll "mysteriously" log users out constantly. Cognito specifically notes the authorization code grant as the path to all three token types.

7) Token storage decisions you'll regret
If you throw tokens into localStorage because it's easy, you just made any XSS dramatically more damaging. This isn't theoretical. It's why OAuth security guidance keeps pushing safer patterns and deprecating weaker modes.

8) Treating the ID token as an API credential
Your API should validate access tokens, not ID tokens. Different purpose, different semantics.

My security posture for SPAs (what I do in real life)

This is the part that separates "I got it working" from "I can defend it."

What I refuse to do

- Store tokens in localStorage as the default
- Use the implicit flow "because it's easier"
- Skip state/nonce checks
- Let the frontend "self-validate" auth without server confirmation for protected actions

OAuth security best practice has been evolving for years, and the modern posture is pretty clear: use authorization code + PKCE, avoid legacy patterns, and implement the checks that exist for a reason.
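Since "authorization code + PKCE" keeps coming up, here is a minimal browser-side sketch of the PKCE generation steps described above, using the Web Crypto API. It is a sketch of those two steps only, not a full auth client.

```ts
// pkce.ts: generate code_verifier and code_challenge (S256) in the browser.
function base64UrlEncode(bytes: Uint8Array): string {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

export function createCodeVerifier(): string {
  // 32 random bytes encode to a 43-character base64url string,
  // which sits inside RFC 7636's allowed 43..128 character range.
  const random = crypto.getRandomValues(new Uint8Array(32));
  return base64UrlEncode(random);
}

export async function createCodeChallenge(verifier: string): Promise<string> {
  // code_challenge = BASE64URL(SHA256(code_verifier)), i.e. the S256 method.
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(verifier));
  return base64UrlEncode(new Uint8Array(digest));
}

// Usage: keep the verifier in memory (or sessionStorage) until the /oauth2/token call,
// and send the challenge with code_challenge_method=S256 on /oauth2/authorize.
```

Libraries like oidc-client-ts or Amplify handle this for you; the point of seeing it spelled out is knowing what has to happen so you can debug when it doesn't.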
What I do instead (practical options)

Pick one based on your architecture:

Option A: Backend-for-Frontend (BFF). The frontend never touches refresh tokens. The backend stores tokens server-side and issues a session cookie (HttpOnly, Secure, SameSite). Best for serious products where auth is business-critical.

Option B: SPA-only with tight controls. Use code + PKCE. Store tokens in memory (not persistent storage). Rotate with the refresh token only if absolutely necessary, and treat XSS prevention as a top-tier requirement.

If you're a founder: Option A is usually worth it because it reduces breach blast radius and compliance stress.

Debug checklist (copy this into your runbook)

When login fails:

1. Confirm you're hitting the correct /oauth2/authorize endpoint and domain.
2. Confirm redirect_uri matches the config exactly (scheme, host, path, trailing slash).
3. Confirm you are generating and sending: state (and validating it on return), nonce (and validating it against the ID token claim), and PKCE S256 (code_challenge_method=S256, with code_verifier sent to /oauth2/token).
4. Confirm the token exchange is against /oauth2/token and includes the same redirect_uri.
5. Validate tokens using the Cognito JWKS and discovery endpoints (don't hardcode keys); a minimal verification sketch follows the closing thought.
6. Confirm your API expects the access token, not the ID token.

Closing thought

Auth is a revenue feature disguised as plumbing:

- If it breaks, users churn.
- If it's weak, you inherit existential risk.
- If it's well-designed, you buy speed everywhere else.
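For checklist item 5, here is a minimal sketch of validating a Cognito access token against the user pool's JWKS using the jose library. The region, user pool ID, and client ID are placeholders; adjust the claim checks to your setup.

```ts
// verify-token.ts: validate a Cognito access token against the user pool's JWKS.
import { createRemoteJWKSet, jwtVerify } from "jose";

const region = "eu-west-2";              // placeholder
const userPoolId = "eu-west-2_EXAMPLE";  // placeholder
const issuer = `https://cognito-idp.${region}.amazonaws.com/${userPoolId}`;

// Keys are fetched (and cached) from the JWKS endpoint, never hardcoded.
const jwks = createRemoteJWKSet(new URL(`${issuer}/.well-known/jwks.json`));

export async function verifyAccessToken(token: string) {
  // jwtVerify checks signature, issuer, and expiry.
  const { payload } = await jwtVerify(token, jwks, { issuer });

  // Cognito-specific checks: this must be an access token issued to your app client.
  if (payload.token_use !== "access") {
    throw new Error("Not an access token (check the token_use claim)");
  }
  if (payload.client_id !== "YOUR_CLIENT_ID") {  // placeholder app client ID
    throw new Error("Token was issued to a different client");
  }
  return payload; // contains sub, scope, cognito:groups, exp, and so on
}
```

AWS also publishes aws-jwt-verify, which wraps this pattern specifically for Cognito; either way, the verification lives server-side, next to the API the token protects.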
Serverless Computing with React: taking advantage of another cloud computing trend
In the React-dominated world, we have to acknowledge that serverless makes our lives as developers easier, thanks to the abstraction that serverless computing provides. This short article gives a general overview of serverless computing and how to take advantage of it with React.

What is Serverless?

To make sure we can move forward with our discourse, I must ensure that my reader understands what serverless is. In contrast to its name, serverless computing doesn't mean there are no servers involved. Instead, it abstracts server management away from developers, allowing them to focus solely on writing code and building features. The infrastructure is handled by a cloud provider, empowering developers to deploy and scale applications without the burden of managing servers.

Serverless and React Integration

The marriage of serverless computing and React brings forth a potent blend of efficiency and scalability. React, with its component-based architecture, seamlessly aligns with the serverless model, enabling developers to create responsive and dynamic user interfaces without the traditional constraints of server management. A tiny sketch of the pattern appears at the end of this article.

Some of the advantages of serverless with React:

1. Cost Optimization
Heard of the concept of 'pay-as-you-go'? That is how serverless computing ensures you only pay for the compute resources you consume. React's efficient rendering further optimizes the usage of resources, making it a cost-effective solution for projects of any scale.

2. Rapid Development
Did you know that serverless computing accelerates development cycles by abstracting server management? That's right: React's declarative syntax and modular components enhance this speed, allowing developers to iterate quickly and deliver features at an unprecedented pace.

3. Automatic Scaling
Serverless platforms handle auto-scaling effortlessly, ensuring your application scales dynamically with user demand. React's virtual DOM and efficient rendering contribute to a smooth and responsive user experience even during traffic spikes.

Wrapping it up

As we all embrace serverless computing paired with React, we open the door to a future where development is agile, scalable, and cost-efficient. The serverless-React synergy empowers us to focus on crafting exceptional user interfaces while leaving the infrastructure complexities to the cloud.
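To ground the overview above, here is a minimal sketch of the pattern: a small serverless function (written in the AWS Lambda handler style) and a React component that calls it. The endpoint URL is a placeholder, and the handler shape assumes an HTTP-style trigger such as API Gateway or a function URL.

```tsx
// handler.ts: a minimal Lambda-style function; no server for you to manage.
export const handler = async () => ({
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hello from a serverless function" }),
});

// Greeting.tsx: a React component that calls the deployed function.
import { useEffect, useState } from "react";

export function Greeting() {
  const [message, setMessage] = useState("loading...");

  useEffect(() => {
    // Placeholder URL: point this at your deployed function's endpoint.
    fetch("https://your-function-url.example.com/")
      .then((res) => res.json())
      .then((data) => setMessage(data.message))
      .catch(() => setMessage("request failed"));
  }, []);

  return <p>{message}</p>;
}
```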
ChatGPT 3.5: Now Talking
A couple of days ago I decided to look something up and went to ChatGPT instead of traditional Google. When I logged into my account, I noticed a headphones icon which was not there before, at least in the free version, and I began exploring it immediately. In that moment, my initial pursuit took a backseat as my focus shifted towards exploring this new read-aloud feature.

It got me thinking how quickly OpenAI managed to release such a useful daily tool as reading aloud. Essentially, I can now ask ChatGPT a question, and it will read its response aloud, saving me time and, most importantly, keeping me company. Fascinating.

Every evening I allocate 90 minutes to challenging myself with LeetCode problems to stay sharp in data structures and algorithms. So I thought, why not turn this evening into a discussion session between myself and ChatGPT? And so we discussed the best approaches to solving a challenging LeetCode problem, Burst Balloons. While ChatGPT may not be flawless and may occasionally provide incorrect responses, it still serves as a good tool for testing algorithms, just as it did with the dynamic programming approach for Burst Balloons.

Upon exploring ChatGPT's (relatively) new feature, I found that it offers the following:

- It generates text responses (as usual) based on user input and simply converts them to speech.
- It recognizes multiple languages, same as before, only now it can render them all as speech.
- It supposedly recognizes your unique voice (this is only my assumption, but voice authentication has been around for a long time, so it's a reasonable guess).
- It supports 37 languages, making communication easier for bilinguals.

My favorite voice is Cove; he sounds professional and speaks at a well-paced rhythm.

Overall, I enjoyed talking to ChatGPT, partly because I can seamlessly switch between English, Russian, and French, and it understands me just as well as before, when I chatted with it via text. Using the read-aloud feature to rehearse speeches for stakeholder meetings is an interesting practice worth trying out. The potential for natural and intuitive conversations feels closer than ever.

I generally lean towards human-generated content rather than AI-generated, so I'll continue to write my articles by typing (or voice-recording); however, I appreciate how AI is changing and bringing sought-after daily tools for improved human performance. It will be a long while before AI can dominate the Matrix because, for now, humankind remains in control.