Free Tool

Free URL Encoder & Decoder

Encode or decode any URL instantly. Handle special characters, query strings, fragments, and percent-encoding. Component and full-URL modes apply the right escape rules for each job. Free, browser-only, copy-paste ready.

No signup required
Free forever
GDPR compliant
Powered by U2L

Quick Answer

URL encoding (also called percent-encoding) converts characters that are reserved or unsafe in URLs into a percent-prefixed hex sequence (space becomes %20, & becomes %26, etc.). The U2L URL Encoder/Decoder runs both directions, supports component-only mode (for query string values) and full-URL mode (preserves valid structure), and stays in your browser - no data sent anywhere.

Quick Facts

  • Defined by RFC 3986. Reserved characters (: / ? # [ ] @ ! $ & ' ( ) * + , ; =) and unsafe characters (space, <, >, etc.) must be percent-encoded inside URL values.
  • Two encoding modes: encodeURI() preserves URL structure (doesn't encode :/?#); encodeURIComponent() encodes everything reserved (correct for query string values).
  • Spaces can be encoded as %20 or +. Both work in query strings; %20 is required in path segments. The tool defaults to encodeURIComponent which uses %20.
  • Unicode characters are encoded as their UTF-8 byte sequence in percent form: 'café' becomes 'caf%C3%A9' (the é is two UTF-8 bytes 0xC3 0xA9).
  • Internationalized Domain Names (IDN) use Punycode for the host portion (xn--), not percent-encoding. This tool handles the path/query parts; IDN conversion is separate.
  • Decoding is straightforward: each %XX sequence becomes its byte value, and consecutive bytes are decoded as UTF-8. Malformed sequences (a lone %, an incomplete %X) cause strict decoders to throw; lenient decoders pass them through unchanged.
  • Browsers automatically decode percent-encoded URLs in the address bar for display, but transmit the encoded form. Copy-paste from address bars usually gets the decoded version.
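The facts above can be checked directly in a browser console with the same built-ins this tool wraps:

```javascript
// Encode and decode in both directions with the JavaScript built-ins.
const raw = 'price=1+1 & more';
const encoded = encodeURIComponent(raw);   // '=' -> %3D, '+' -> %2B, ' ' -> %20, '&' -> %26
console.log(encoded);                      // price%3D1%2B1%20%26%20more
console.log(decodeURIComponent(encoded));  // round-trips back to the original
```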

How to encode or decode a URL

Three steps. Pick the mode, paste the input, copy the output.

  1. Pick encode or decode

    Encode: convert raw text into percent-encoded form (e.g. 'hello world' → 'hello%20world'). Decode: convert percent-encoded form back to raw text. The tool runs both directions.

  2. Choose component or full-URL mode

    Component (encodeURIComponent): encodes everything reserved. Use for query string values, path segments, fragment text. Full URL (encodeURI): preserves URL structure (:/?#=&) - safe for whole URLs but won't fix already-broken values.

  3. Paste input, copy output

    Output updates live as you type or paste. Tap Copy to grab the result. Reset clears both fields. Everything happens in your browser - the input never leaves your device.

What is a URL Encoder / Decoder?

URL Encoder / Decoder is a tool that converts text to and from URL percent-encoded form. URL encoding (percent-encoding) is the mechanism defined by RFC 3986 for representing characters that are reserved or unsafe inside URLs. Spaces become %20, ampersands become %26, slashes become %2F (when in a value, not as a path separator). The U2L Encoder/Decoder runs both directions in your browser with no data sent to U2L servers.

URLs have a strict syntax. Some characters (the colon between scheme and host, the slashes between segments, the question mark before query) are 'reserved' for structural roles. Other characters (spaces, control characters, characters above ASCII) aren't safe to include directly because of how systems and protocols handle text. Percent-encoding solves both problems: each problematic character becomes a percent sign followed by its hex byte value, and the encoded form is always safe to include anywhere in a URL.

There are two practical modes. Full-URL encoding (encodeURI in JavaScript, urllib.parse.quote with safe='/?:@#&=' in Python) preserves URL structure - colons, slashes, and the query separator pass through unchanged. Component encoding (encodeURIComponent in JavaScript, urllib.parse.quote with default safe in Python) encodes everything reserved, so the output is safe to embed inside a URL value. The choice depends on whether you're processing a whole URL or just a value going into one.
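The distinction between the two modes is easy to see on a small example (example.com is a placeholder):

```javascript
// The same input through both modes.
const url = 'https://example.com/path/to page?q=a&b';

// Full-URL mode: structure (: / ? &) survives, the space is encoded.
console.log(encodeURI(url)); // https://example.com/path/to%20page?q=a&b

// Component mode: every reserved character is escaped - wrong for a
// whole URL, right for a single value.
console.log(encodeURIComponent('a&b')); // a%26b
```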

Decoding is simpler: each %XX sequence is looked up as a hex byte, sequences combine into UTF-8 character data, and the result is decoded text. The U2L tool handles malformed input gracefully (it shows an error rather than throwing), supports very long inputs (up to ~1MB), and runs entirely in your browser so sensitive data (auth tokens, session IDs, internal URLs) never leaves your device.

How does a URL Encoder / Decoder work?

When you type or paste input, the tool calls the JavaScript built-in encodeURIComponent (component mode) or encodeURI (full-URL mode) for encoding, and decodeURIComponent for decoding. These follow RFC 3986 and interoperate with Python's urllib.parse.quote, .NET's Uri.EscapeDataString, and equivalent functions in every modern language (Java's URLEncoder and Go's url.QueryEscape differ only in encoding the space as + rather than %20).

Component mode (encodeURIComponent) escapes everything except alphanumerics and the nine marks - _ . ! ~ * ' ( ) that the ECMAScript spec leaves unencoded. Full-URL mode (encodeURI) leaves an additional 11 characters unencoded: # $ & + , / : ; = ? @. Use full-URL when the input is a whole URL and you want existing structure preserved; use component when the input is a value going into a query string or path segment.

Unicode characters above ASCII are encoded as their UTF-8 byte sequence. The character 'é' (U+00E9) encodes as %C3%A9 because its UTF-8 representation is the two bytes 0xC3 0xA9. The Chinese character '中' (U+4E2D) encodes as %E4%B8%AD. The emoji '🚀' (U+1F680) encodes as %F0%9F%9A%80. UTF-16 surrogate pairs are handled automatically.
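All three examples are easy to verify in a console:

```javascript
// Non-ASCII characters encode as their UTF-8 bytes, one %XX per byte.
console.log(encodeURIComponent('café')); // caf%C3%A9 (é = 2 bytes)
console.log(encodeURIComponent('中'));   // %E4%B8%AD (3 bytes)
console.log(encodeURIComponent('🚀'));   // %F0%9F%9A%80 (surrogate pair, 4 bytes)
```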

Decoding reverses the process: each %XX is converted back to its byte value, consecutive bytes assemble into UTF-8 sequences, and UTF-8 sequences decode to characters. Malformed input (lone %, incomplete %X, invalid UTF-8) triggers an error message rather than corrupting the output. The tool processes up to ~1MB of input client-side; larger inputs may slow your browser but are not blocked.

Use Cases

How marketers, businesses, and developers use the URL Encoder / Decoder.

Building API request URLs

When constructing URLs that include user-supplied data (search queries, IDs, dynamic paths), every value must be percent-encoded before insertion. The encoder produces correct output every time, eliminating bugs from manually concatenating raw strings.

Debugging webhook payloads

Webhook URLs often include encoded query strings. When investigating why a webhook didn't fire, decoding the URL reveals the exact data that was sent. The tool decodes complex chains (encoded twice, triple-encoded, etc.) reliably.

Generating UTM-tagged URLs by hand

When the UTM Builder or Bulk UTM tools aren't an option, you can manually URL-encode UTM values to ensure spaces, ampersands, and special characters in campaign names don't break the URL.

Decoding shortened URLs that contain encoded targets

Some short link services encode the destination URL inside their query string (?u=https%3A%2F%2Fexample.com%2F...). Decoding reveals the actual destination, useful for security inspection or verification.

Reading server logs

Web server access logs record URLs in their encoded form. To match log entries to the actual user actions, you decode the URL and read the raw query strings, including search terms, form submissions, and tracked events.

Constructing wa.me / mailto: / sms: links

WhatsApp wa.me URLs require percent-encoded message bodies. mailto: and sms: links use the same encoding. The tool produces correct encoded output for prefilled message text containing special characters or non-English characters.

Encoding non-English domain names and paths

International websites with Unicode in their paths (Chinese, Hindi, Arabic, etc.) need each character percent-encoded for cross-platform compatibility. The encoder handles UTF-8 byte sequences automatically.

API documentation and example URLs

Technical writers documenting API endpoints often need to show URLs with sample payloads. Properly encoded examples prevent confusion when readers copy-paste them. The tool helps generate clean, copy-pasteable documentation URLs.

Decoding QR code payloads

Some QR codes encode URLs with double-encoding (paranoid encoding for safety). Decoding the QR's content shows the actual destination. Useful when investigating suspicious QR codes or debugging QR-driven flows.

Penetration testing and security research

Security researchers test how applications handle URL-encoded input (e.g. SQL injection payloads, XSS, path traversal). Encoders that handle malformed input cleanly are part of the standard toolkit.

URL Encoder / Decoder vs Alternatives

Side-by-side feature and pricing comparison with the top alternatives.

Feature comparison: U2L vs URLEncoder.org, URL Decoder, Online URL Encoder, and the browser dev-tools console.

  • Free, no signup; component vs full-URL modes; UTF-8 / Unicode support; bidirectional encode and decode in one tool: all included in U2L.
  • Live encoding as you type: U2L updates output live; the dev-tools console requires a manual call per run.
  • Data stays in browser: guaranteed by U2L; unclear for URLEncoder.org, URL Decoder, and Online URL Encoder.
  • Malformed input: U2L and the web alternatives show an error message; the dev-tools console throws an exception.
  • Input size: U2L accepts up to ~1MB; the other web tools impose lower limits.

URL Encoder / Decoder vs URLEncoder.org

URLEncoder.org is one of the longest-running free URL encoders. It works, but the UI hasn't changed in a decade. Functional but not pleasant to use repeatedly.

U2L's encoder/decoder offers identical core functionality with live encoding as you type, both modes (component / full-URL) clearly differentiated, and an explicit promise that data stays in your browser. For developers running encoders dozens of times per day, the small UX wins compound.

URL Encoder / Decoder vs Browser dev tools console

Every browser ships encodeURIComponent() and decodeURIComponent() in the JavaScript console. The output is identical to U2L's. For one-off use during dev sessions, the console is faster.

U2L's encoder is for non-dev contexts (sharing encoded values, double-checking before publishing, working in non-coding tabs) and for handling edge cases (component vs full-URL distinction, large input, malformed-input recovery) that the bare console doesn't surface clearly.

Best Practices

Use component mode for query string values

Encoding 'hello world' for a query string parameter ?q=hello%20world should use component mode. Full-URL mode would also encode the space, but it leaves & and = untouched, so a value like 'rock & roll' would silently break the query string.

Use full-URL mode when sanitizing whole URLs

If you have a partly-encoded URL with structural characters (://?#) that should remain intact, full-URL mode preserves them while encoding spaces and Unicode. Component mode would over-encode and break the URL.

Encode UTF-8 characters as bytes, not codepoints

The character 'é' encodes as %C3%A9 (its UTF-8 byte sequence), not %E9 (its codepoint). Languages and frameworks that get this wrong produce URLs that fail on different servers. The tool handles UTF-8 correctly by default.

Don't double-encode by accident

If a URL is already encoded (e.g. %20 for space) and you encode again, you get %2520 (% encoded as %25, then 20). Either decode first or skip encoding the already-safe %20 sequences. The tool's preview shows whether the input is already encoded.

Match casing to your downstream system

Percent-encoding is case-insensitive per spec, but some systems (legacy CGI scripts, custom backends) compare strings literally. Default to uppercase hex (%2F, not %2f); if a downstream system expects lowercase, pick one convention and stick to it.

Don't rely on + for spaces in query strings

In query strings (after the ?), + is a valid alternative encoding for space. Some servers decode + as space; some don't. To avoid confusion, use %20 explicitly via component mode, and encode a literal + as %2B so it can't be mistaken for a space.

Validate the decoded output

If you decode an attacker-supplied encoded string, validate the result before using it (e.g. don't pass directly to SQL, eval, or shell). Decoding can produce strings with control characters, null bytes, or other dangerous content.

Use Punycode for international domain names

URL percent-encoding handles paths and queries; the host portion uses Punycode (xn--) for non-ASCII characters. 'café.com' becomes 'xn--caf-dma.com'. Don't try to percent-encode the host - use a Punycode converter.

Common Mistakes to Avoid

Encoding the whole URL with component mode

https://example.com/?q=hello → https%3A%2F%2Fexample.com%2F%3Fq%3Dhello with component mode. Now the URL is unrecognizable as a URL. Use full-URL mode for whole URLs; use component mode for individual values.

Forgetting that + means space in query strings

?name=John+Doe and ?name=John%20Doe are equivalent in query strings. In a path segment, though, + is just a literal plus sign, not a space. Always use %20 if you're not sure, since it works everywhere.

Double-encoding without realizing

Encoding %20 produces %2520. Encoding 'hello world' twice produces hello%2520world. Always check whether your input is already encoded before re-encoding.

Encoding the # character in fragments

The # marks the start of a fragment in a URL. Encoding it as %23 makes it part of the path or query, not the fragment marker. If you have literal # in path content, encode it; if it's marking a fragment, leave it.

Treating Unicode codepoints as bytes

The character 'é' is one codepoint but two UTF-8 bytes. Encoding as the codepoint (%E9) gives Latin-1 encoding which most modern systems reject. Always encode as UTF-8 bytes (%C3%A9). Modern encoders default to UTF-8 correctly.

Pasting URLs from terminal output

Some terminals wrap long URLs with line breaks or insert special characters. Always paste from the original source; visually-identical pasted URLs may include hidden Unicode that breaks decoding.

Not URL-encoding form data

Form data sent as application/x-www-form-urlencoded must have every value percent-encoded. application/json doesn't need URL encoding (it's its own format). Confusing the two breaks request parsing.

Technical Specifications

  • Specification: RFC 3986 (URI Generic Syntax). Older RFCs (1738, 2396) defined earlier versions; RFC 3986 is current.
  • Reserved characters (RFC 3986): : / ? # [ ] @ ! $ & ' ( ) * + , ; = (full-URL mode preserves the structural ones; note encodeURI still encodes [ and ]).
  • Always-encoded characters: space (plus other control and non-ASCII characters), and reserved characters when not used structurally.
  • Unicode handling: UTF-8 byte sequences, one %XX per byte; surrogate pairs handled automatically.
  • Maximum input size: ~1MB; larger inputs may slow the browser but are not blocked.
  • JavaScript built-ins used: encodeURI, encodeURIComponent, decodeURI, decodeURIComponent.
  • Equivalence: interoperable with Python's urllib.parse.quote, .NET's Uri.EscapeDataString, and similar functions (Java's URLEncoder and Go's url.QueryEscape encode the space as + rather than %20).
  • Edge cases handled: empty input, malformed % sequences, partial UTF-8, surrogate pairs, very long input.

Industry-Specific Use Cases

Web development and engineering

Daily use building URLs in CMS templates, debugging webhook payloads, generating API request URLs. The tool is one of the rare 'utility tools' that engineers actually keep open in a browser tab.

QA and integration testing

Testing REST APIs, validating webhook signatures, decoding logged URLs to find bugs. The encoder/decoder is part of every API tester's standard toolkit alongside Postman and curl.

Marketing and growth ops

Building UTM-tagged URLs by hand, decoding ad-platform tracking URLs to understand what data they're capturing, sharing properly-encoded URLs in slack/email without breaking them.

Technical writing and documentation

Generating clean URL examples for API docs, blog posts, and tutorials. Properly encoded URLs in documentation prevent reader-side bugs when they copy-paste examples.

Security research and pentesting

Crafting payloads with URL-encoded special characters (path traversal, SQLi, XSS), testing how applications handle encoded input. Always part of the security researcher's hands-on toolkit.

DevOps and SRE

Reading and parsing access logs, debugging URL-based routing, building URL fragments for CI/CD systems. The tool fits between curl and grep in the standard URL-debugging workflow.

Frequently Asked Questions

What is URL encoding?

URL encoding (also called percent-encoding) converts characters that are reserved or unsafe in URLs into a percent-prefixed hex sequence. Spaces become %20, ampersands become %26, etc. Defined by RFC 3986. The mechanism every web framework, browser, and HTTP client uses to safely transport text inside URLs.

What's the difference between component and full-URL mode?

Component (encodeURIComponent) encodes everything reserved - safe for query string values. Full URL (encodeURI) preserves URL structure (: / ? # & = , ; @) - safe for whole URLs. Use component when encoding a value going into a URL; use full-URL when sanitizing a complete URL.

Why does space sometimes encode as %20 and sometimes as +?

In path segments, space must be %20. In query strings (after the ?), both work and many servers treat them as equivalent. To avoid confusion across systems, use %20 always. The tool defaults to %20.

How are Unicode characters encoded?

As their UTF-8 byte sequence in percent form. The character 'é' (U+00E9) encodes as %C3%A9 because its UTF-8 representation is two bytes (0xC3 0xA9). Modern encoders default to UTF-8; this tool follows that convention.

What happens if my input is already encoded?

The encoder treats it as raw text and re-encodes the existing % signs too, giving you double-encoded output (%20 becomes %2520). Both modes encode the % sign this way, so decode first whenever the input might already be encoded.

Why am I getting an error when decoding?

Malformed sequences (lone %, %X without a second hex digit, invalid UTF-8) trigger errors. The browser's decodeURIComponent throws on these; we catch the error and show a friendly message. Check for stray % signs or truncated sequences.

Is the data sent to U2L servers?

No. The encoder/decoder runs entirely in your browser using JavaScript built-ins. Input is never sent to U2L servers, logged, or stored. You can verify in browser dev tools by checking the Network tab.

Does it support reverse-engineering double-encoded URLs?

Yes - decode twice. Some systems double-encode for paranoid safety; %2520 decodes to %20 first, then to a space. The tool runs decode once per submission; click Decode again to handle nested encoding.

What's the largest input it handles?

About 1MB of text. Beyond that, browser memory and CPU may slow significantly. For very large inputs (server logs, big CSVs), use a server-side tool or streaming approach.

Does this work for all languages and character sets?

Yes. UTF-8 is the universal encoding for URLs and the tool supports any character UTF-8 supports - Latin, Cyrillic, Chinese, Arabic, Hindi, emoji, etc. Each character encodes as its UTF-8 byte sequence.

How do I encode a URL fragment (after #)?

The # itself is the fragment delimiter and stays unencoded. Content after # follows the same rules as other URL parts - use component mode for values going into the fragment, full-URL mode if the fragment is itself a URL or contains structural characters.

What about international domain names (xn--)?

The host portion of a URL uses Punycode (xn--) for non-ASCII characters, not percent-encoding. 'café.com' becomes 'xn--caf-dma.com'. Use a Punycode converter for the host; use this tool for the path and query.

Why are some characters not encoded?

Per spec, alphanumeric and the four unreserved marks (- _ . ~) are never encoded. They're safe everywhere in a URL. Encoding them anyway (e.g. - as %2D) is technically valid but pointless and may confuse some servers.

How does this differ from HTML entity encoding?

HTML entities encode characters for HTML body content (&amp; for &, &lt; for <). URL encoding encodes characters for URL transport (%26 for &, %3C for <). Different purposes; don't mix them. This tool is URL encoding only.

Can I encode a JSON payload as a URL parameter?

Yes, in component mode. URL-encode the entire JSON string (which contains {, }, ", :, ,, etc.) as a single query value. The receiving server URL-decodes the value, then JSON-parses the result.

Why does my encoded URL look different from another tool's output?

Most tools agree on UTF-8 encoding for non-ASCII and standard percent-encoding for reserved characters. Differences usually come from: case (UPPER vs lower hex), strict vs loose handling of unreserved chars (-_.~), or component vs full-URL mode choice. The tool follows JavaScript's built-in conventions.

Is there a fee?

No. URL encoding is an open standard; running encoders is free. The tool runs in your browser using built-in JavaScript functions - no server costs to pass on.

Does it work for path traversal (%2F = /)?

Encoding / as %2F is valid, and most modern servers treat the encoded form as a literal slash (not a path separator). This is useful when you have a literal / inside a path segment value. Some legacy systems decode %2F too aggressively; test with your specific server.

Can I use it offline?

Once the page loads, yes - all the encoding happens in your browser. If you disconnect from the internet after loading the page, encoding/decoding still works. For repeated offline use, save the page and use it as a local file.

What if I need to encode a binary payload?

URL encoding works for arbitrary bytes - each byte 0x00-0xFF can be percent-encoded. For binary data in URLs, percent-encode each byte. For larger binary data, base64 encoding is more efficient (smaller output, no % overhead per byte).

Key Terms

Percent-encoding
The mechanism for representing reserved or unsafe characters in URLs as % followed by the hex byte value. Also called URL encoding. Defined by RFC 3986.
Reserved characters
Characters with structural meaning in URLs: : / ? # [ ] @ ! $ & ' ( ) * + , ; =. Encoded when used as values; preserved when used structurally.
Unreserved characters
Characters that are always safe in URLs and never need encoding: alphanumeric and the four marks - _ . ~. Encoding them is valid but unnecessary.
Component mode (encodeURIComponent)
The strict encoding mode that escapes everything reserved. Safe for query string values, path segments, fragment text. Use when the input is a value going into a URL.
Full-URL mode (encodeURI)
The lenient encoding mode that preserves URL structure (:/?#=&). Safe for whole URLs but won't fix already-broken values inside them.
UTF-8 encoding
The character encoding used to convert Unicode characters to byte sequences before percent-encoding. Universal for modern URLs. Each non-ASCII character becomes 2-4 percent-encoded bytes.
Punycode
The encoding used for non-ASCII characters in domain names (xn--). Different from URL percent-encoding; applies only to the host portion of a URL.

Need to handle URL encoding at scale?

U2L's API processes encoded URLs as part of every short link, UTM build, and analytics call. Free tier covers most uses; upgrade for higher volume and dedicated support.

See the U2L API