By Toolkiya Team · April 17, 2026 · 8 min read

One month after launching Toolkiya — 97 free browser tools, real GSC numbers, what worked, what didn't

A month ago I launched Toolkiya — a single Next.js app with the first batch of free browser tools, everything running client-side, zero server cost. Since then it's grown to 97 tools. The launch post did better than I expected: Product Hunt listing, SaaSHub feature, a few Reddit threads, one Dev.to post that pulled ~8,000 views.

Today I ran the Google Search Console audit script I wrote over the weekend and stared at the numbers for an hour. Some of them surprised me. Some of them made me rethink what I thought was a "done" feature. Here's the honest recap.

The numbers (actual GSC data, last 28 days)

  • 97 tool pages live
  • 65 indexed by Google — 67% coverage. More on why the other 33% aren't indexed below.
  • Top page: homepage — 195 impressions, 51 clicks, average position 6.2
  • Best-performing tool: /remove-background — 108 impressions, 17 clicks, position 3.5
  • Brand query "toolkiya" — 124 impressions, 52 clicks, position 1, 41.9% CTR
  • Biggest surprise: /rent-receipt at position 15 with 69 impressions. I almost deleted this tool during build because I thought nobody would use it. Turns out HRA tax proof in India is a huge long-tail query and I accidentally hit a gap.

Lesson 1: Client-side PDF libraries are more fragile than I thought

I shipped the PDF merge tool thinking pdf-lib was bulletproof. It isn't — not for real-world PDFs.

Within the first week, the bug reports rolled in. The pattern: "I tried to merge my bank statement PDF and got a cryptic error." Turns out, every Indian bank e-statement is password-encrypted (a security requirement). pdf-lib's PDFDocument.load() throws on encrypted inputs by default.

My original error handler:

try {
  const doc = await PDFDocument.load(bytes);
  // ...
} catch {
  setError("Error merging PDFs. Please check your files and try again.");
}

That generic catch {} was swallowing the real reason. I refactored to a shared loader used across five PDF tools:

import { PDFDocument } from "pdf-lib";

export async function loadPdfSafe(file: File): Promise<PDFDocument> {
  const bytes = await file.arrayBuffer();
  try {
    return await PDFDocument.load(bytes);
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    if (/encrypt/i.test(msg)) {
      try {
        // Many bank statements only have owner-password, not user-password.
        // ignoreEncryption lets us read those.
        return await PDFDocument.load(bytes, { ignoreEncryption: true });
      } catch {
        throw new Error(`"${file.name}" is password-protected. Please unlock it first.`);
      }
    }
    throw new Error(`Could not read "${file.name}". It may be corrupted or not a valid PDF.`);
  }
}

Now the user sees a specific, actionable error naming the exact file. Takeaway: generic error messages are a silent UX killer. In a client-side app the server can't log the error for you — the user sees the message and bounces.

Lesson 2: Mobile touch events are not free

The Screenshot Annotator tool shipped with mouse handlers only. Desktop users loved it. Mobile users reported "the blur tool does nothing" and "I can't add text".

I assumed touch events were auto-synthesized into mouse events — they are, but only for single taps, not drags. A drag on a canvas sends touchstart → touchmove → touchmove → touchend, never mousedown/mousemove/mouseup.

<canvas
  onMouseDown={...}  onMouseMove={...}  onMouseUp={...}
  onTouchStart={(e) => { e.preventDefault(); startDraw(pos(e.touches[0])); }}
  onTouchMove={(e) => { e.preventDefault(); moveDraw(pos(e.touches[0])); }}
  onTouchEnd={(e) => { e.preventDefault(); endDraw(); }}
  style={{ touchAction: "none" }}  // prevents page-scroll while drawing
/>
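The `pos` helper in that snippet is whatever maps a pointer position to canvas coordinates. The name and shape below are my sketch, not the actual Toolkiya code, but the important detail is real: a CSS-scaled canvas means `clientX`/`clientY` can't be used directly, and the pure math can be split out so it's testable without a DOM.

```typescript
// Sketch of a coordinate mapper for a scaled canvas. Both MouseEvent and
// Touch expose clientX/clientY, so one helper serves mouse and touch paths.
interface PointLike {
  clientX: number;
  clientY: number;
}

interface RectLike {
  left: number;
  top: number;
  width: number;
  height: number;
}

// Map viewport coordinates into the canvas's internal pixel grid,
// compensating for CSS scaling (canvas.width vs. rect.width).
export function toCanvasPos(
  p: PointLike,
  rect: RectLike,
  canvasWidth: number,
  canvasHeight: number
): { x: number; y: number } {
  return {
    x: (p.clientX - rect.left) * (canvasWidth / rect.width),
    y: (p.clientY - rect.top) * (canvasHeight / rect.height),
  };
}

// In the component, `pos` would close over the canvas element:
// const pos = (p: PointLike) => {
//   const rect = canvas.getBoundingClientRect();
//   return toCanvasPos(p, rect, canvas.width, canvas.height);
// };
```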

Another mobile bug: the text-input field used onBlur={commitText} to save. On mobile, opening the keyboard immediately blurs the input — so the text box committed an empty string and disappeared before the user could type. Replaced onBlur with an explicit "Done" button.

Takeaway: every interactive tool needs a real finger-test on a phone. Emulators lie about touch behavior.

Lesson 3: HTTP 410 Gone is the SEO fix I was most afraid of

When I bought toolkiya.com, the previous owner had run a product listing site. Google had thousands of indexed URLs like toolkiya.com/?productulde39150.shtm and /m075000274?srsltid=.... Every week GSC emailed me new "404 errors" as Google re-crawled zombie URLs.

My initial fix was a plain 404 page. Bad idea — Google interprets 404s as "the page is temporarily missing, keep checking". The correct signal is HTTP 410 Gone: "this page is permanently removed, drop it".

// src/proxy.ts
import { NextRequest, NextResponse } from "next/server";
const GONE_PATTERNS = [
  /\.(php|asp|aspx|cgi|jsp|shtm|shtml)$/i,
  /^\/products?\//i,
  /^\/m\d{6,}/i,                      // /m075000274
  /^\/\?product[a-z0-9]+/i,           // ?productxxx12345.shtm
  /\?.*\bsrsltid=/i,                  // Google Shopping tokens
  // ... 30+ more patterns
];

export function proxy(request: NextRequest) {
  const { pathname, search } = request.nextUrl;
  for (const p of GONE_PATTERNS) {
    if (p.test(pathname) || p.test(pathname + search)) {
      return new NextResponse(
        `<!DOCTYPE html><html>...</html>`,
        {
          status: 410,
          headers: {
            "Content-Type": "text/html",
            "X-Robots-Tag": "noindex, nofollow",
          },
        }
      );
    }
  }
  return NextResponse.next();
}

Plus explicit Disallow rules in robots.ts so Google stops re-crawling them at all. Within 10 days, the 404 error count dropped from 2,800 to under 200.
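A robots.ts along those lines looks roughly like this — the disallow paths here are illustrative examples, not the full rule list, and I've left off Next's `MetadataRoute.Robots` type annotation so the sketch stands alone. Next.js App Router serves this file as /robots.txt.

```typescript
// app/robots.ts — sketch of a Next.js App Router robots file.
// Disallow entries are examples; the real list mirrors GONE_PATTERNS.
export default function robots() {
  return {
    rules: [
      {
        userAgent: "*",
        disallow: [
          "/products/", // legacy product-listing paths
          "/product/",
          "/m0",        // prefix-matches /m075000274-style IDs
          "/*.shtm$",   // Google supports * and $ in robots patterns
        ],
      },
    ],
    sitemap: "https://toolkiya.com/sitemap.xml",
  };
}
```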

Takeaway: 404 ≠ 410. If a URL should never come back, use 410. One of the few SEO tricks that works within days instead of months.

Lesson 4: The feature I almost didn't build

The resume builder started as a side-quest. I thought nobody would use "yet another resume builder" when Canva exists.

Then the requests came in:

  • "Can I export to LaTeX?"
  • "Can I upload my old resume and let AI improve it?"
  • "Do you have Europass / Gulf region templates?"
  • "Can I use DOCX? My recruiter only accepts Word."

I ended up rebuilding the entire thing over two sessions:

  • 7 templates (Classic, Modern, Minimal, Executive, Europass for EU, Gulf for UAE/Saudi CVs with photo + DOB + nationality + marital status, Functional)
  • 3 paper sizes — A4, US Letter, US Legal
  • 3 export formats — PDF (pdf-lib), DOCX (docx npm), LaTeX (handwritten template)
  • Full customization — 8 accent colors, 3 font families, density, margins, divider style, bullet style, date-format normalizer, section reorder with hide/show toggles
  • LaTeX import — paste a .tex resume, AI extracts the fields
  • PDF upload — upload old resume, AI parses it, you edit

The full state lives in one ResumeData type that's passed to three parallel renderers. Any change to the form updates all three outputs consistently.
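As a rough sketch of that shape — the field names are my guesses, not the real type, and the date normalizer is one plausible implementation of the feature listed above:

```typescript
// Hypothetical sketch of a single-source-of-truth resume state.
interface ResumeData {
  name: string;
  title: string;
  sections: {
    id: string;
    heading: string;
    hidden: boolean; // hide/show toggle per section
    items: string[];
  }[];
}

// Each export format implements the same renderer signature, so one
// state update flows to PDF, DOCX, and LaTeX consistently.
type Renderer = (data: ResumeData) => Promise<Uint8Array> | string;

// One plausible date-format normalizer: turn "2021-03" into "Mar 2021"
// so mixed user inputs render uniformly across templates.
const MONTHS = [
  "Jan", "Feb", "Mar", "Apr", "May", "Jun",
  "Jul", "Aug", "Sep", "Oct", "Nov", "Dec",
];

export function normalizeDate(input: string): string {
  const m = input.match(/^(\d{4})-(\d{1,2})$/);
  if (!m) return input; // leave free-form dates ("Present") untouched
  const idx = parseInt(m[2], 10) - 1;
  return idx >= 0 && idx < 12 ? `${MONTHS[idx]} ${m[1]}` : input;
}
```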

Takeaway: when users ask for something three times, it's not scope creep — it's a signal you picked the wrong MVP scope.

Lesson 5: 33% of my pages aren't indexed, and that's fine

Out of 97 tool pages:

  • 65 PASS — submitted and indexed
  • 26 "Discovered – not indexed" — Google knows the URL, chose not to index yet
  • 6 "URL is unknown to Google" — Googlebot hasn't reached them

I used to panic about this. I read every zero-to-hero SEO article, added structured data, wrote 800+ word content for every page. Some of it helped. But the real unlock was understanding: "Discovered – not indexed" is not a bug. It's a ranking signal. Google sees the page, checks authority, and says "not yet". The fix is external signals — backlinks, mentions, organic clicks — not more on-page tweaks.

The 5 highest-impact pages moved from "Discovered" → "PASS" in the 7 days after my launch post went live, even though I hadn't touched their content. The launch itself was the signal Google was waiting for.

Takeaway: on-page SEO has a ceiling. Past that, every hour is better spent on outreach than on schema tweaks.

What I'd do differently

  1. Ship the GSC audit script on day one. I was running manual URL inspections in the GSC UI for weeks. A 240-line Node script using googleapis does it in 30 seconds: pulls sitemap status, top queries, per-URL index verdict, saves a JSON report you can diff week-over-week.
  2. Write the Chrome extension privacy policy before submitting. My extension was rejected twice for "privacy policy does not contain necessary information" — not because it's invasive (it's the opposite), but because the policy didn't explicitly cover the extension in addition to the website. Reviewers look for per-permission justification, host-permission disclosure, and Chrome Web Store Limited Use certification. Cost me two weeks.
  3. Invest in real mobile testing earlier. Chrome DevTools device emulator misses touch-event edge cases. I now test every new tool on an actual phone before shipping.
  4. Wait for real data before building regional resume templates. I built Europass and Gulf formats on a guess. The effort would have been better spent polishing the Classic template.
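The API pulls in that audit script need auth and network access, but the week-over-week diff is a small pure function once the JSON reports exist. A sketch — the report shape (one index verdict per URL) is my assumption about what such a script would save:

```typescript
// Sketch of diffing two saved GSC audit reports.
// Assumed shape: url -> verdict, e.g. "PASS" or "Discovered - not indexed".
type Report = Record<string, string>;

export function diffReports(prev: Report, curr: Report): string[] {
  const changes: string[] = [];
  // URLs present this week: flag new URLs and changed verdicts.
  for (const [url, verdict] of Object.entries(curr)) {
    const before = prev[url];
    if (before === undefined) changes.push(`NEW  ${url}: ${verdict}`);
    else if (before !== verdict) changes.push(`CHG  ${url}: ${before} -> ${verdict}`);
  }
  // URLs that disappeared since last week.
  for (const url of Object.keys(prev)) {
    if (!(url in curr)) changes.push(`GONE ${url}`);
  }
  return changes;
}
```

Run weekly, this is how you notice a page moving from "Discovered" to "PASS" without clicking through the GSC UI.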

What's next

  • Monetization. AdSense is live but conservative — I want to experiment with a $3/mo Pro tier that adds cloud storage for resume drafts. Not sure yet if users want cloud storage on a privacy-first tool.
  • Firefox + Edge extension. The Chrome extension has been rejected twice. Parallel-launching on Firefox while Chrome review is pending.
  • More deep-dives. Writing more posts like this one. The top blog post (background remover tutorial) pulls 84 impressions/day — more than most tools.

If you're building anything browser-based and free, I'd love to compare notes. The margin on zero-cost products is 100% — but only if the distribution works. That's the part I'm still figuring out.

Toolkiya is live at toolkiya.com. All 97 tools are free, no signup, no upload. Got feedback? contact@toolkiya.com.
