Forgot your password?
typodupeerror
Chrome AI

Chrome Now Lets You Turn AI Prompts Into Repeatable 'Skills' 22

Google is rolling out a Chrome feature called "Skills" that lets users save Gemini prompts as reusable one-click workflows they can run across multiple tabs. The feature also includes preset Skills from Google. It's launching first for Chrome desktop users set to US English. The Verge reports: Once you have access to the feature, it can be managed by typing a forward slash ( / ) in Gemini and clicking the compass icon. AI prompts can be saved as Skills directly from your Gemini chat history on desktop, where they'll then be available to reuse on any other desktop devices that are signed into the same Google account on Chrome.

The aim is to spare Chrome users from having to manually retype frequently used Gemini prompts or having to copy and paste them over from a saved list. Some of the Skills made by early testers include commands for calculating the nutritional information of online recipes and creating a side-by-side comparison of product specifications while shopping across multiple tabs, according to Google.

The company is also launching a library of preset Skills that you can save and use instead of making your own. These ready-to-use Skills can also be customized to better suit your needs, providing a starting point without requiring you to create your own from scratch.
This discussion has been archived. No new comments can be posted.

Chrome Now Lets You Turn AI Prompts Into Repeatable 'Skills'

Comments Filter:
  • oh god (Score:5, Insightful)

    by SumDog ( 466607 ) on Tuesday April 14, 2026 @03:31PM (#66093674) Homepage Journal
    Prediction: this "feature" will very rapidly be abused by people who inject instructions, probably in HTML comments or white-on-white text. Google will try to mitigate exploits for several months before finally giving up after larger and larger data breaches and finally remove it from their browser.
  • I can remember to hit ^F but / was nicer.

    Obligatory: You can ask it anything / Great, ask it to fuck off

  • by ZERO1ZERO ( 948669 ) on Tuesday April 14, 2026 @03:41PM (#66093694)
    Im struggling to think of any prompts inwould need to re run, much less prompts built into the browser. Does anyone have any food examples? Also the whole indeterminate nature seems a but off putting. Its possible incould re run a prompt but get different results?
    • Re: (Score:3, Insightful)

      by fatwilbur ( 1098563 )

      food examples

      I regularly ask AI for the right temp and time to reheat things in my air fryer, so that seems like a great use of this function and associated compute resources.

    • How about turning copied text into plain ASCII so it posts cleanly here?

    • by unrtst ( 777550 )

      Im struggling to think of any prompts inwould need to re run, much less prompts built into the browser.

      IMO, that's one of the biggest issues here. If there was some fancy operation that more than a few people might run regularly (IE: the included prompts), it'd likely be better served by a traditional application feature. As a dumb example, if there was no feature to search across all tabs, but they added a prompt to search across all tabs, that'd be stupid - just implement search across tabs and save all the LLM overhead.

      I'm curious what features would be common enough to be included while also necessitatin

    • Document a procedure you want a human to follow. If it's too vague, different humans using your procedure may get different results. If it's specific, like a recipe, that should not happen. Same thing here. These are literally just human-readable procedures for doing "stuff".
  • you're welcome.
  • The idea of saving a "skill" is great. Instead of prompting the system to create a Bash/Powershell script, you just save the script results and there's no need to re-prompt and recalculate everything.

    But, this "skill" is just saving the prompt. The clanker still has to reprocess and recalculate everything. It seems very wasteful unless the output is dynamic enough to require recomputing.

    • Skills have a meaning in the LLMiverse- and it does mean that prompt. So this skill is a skill.
      What you are calling a skill is a script produced by an LLM (possibly a skill fed to an LLM), but that is not a skill.
      Saving scripts is less useful than you think. The LLM will not, unless asked, try to produce something that works generically. It will create very fine-tuned things to solve the problem based on the example dataset it's got in front of it.
      If you want to save off and run scripts, you want plugins
  • Google invented Macros. How quaint.
  • Last I heard LLMs tended to give a different response each time you queried it.

    • Depends. Configurable.
      Decoding strategies vary from stochastic to greedy (non).
      Configurable per generated token (though usually just per prompt)
  • LLMXSS (Score:4, Insightful)

    by abulafia ( 7826 ) on Tuesday April 14, 2026 @10:13PM (#66094234)
    You implemented CORS. You managed to get CSP headers to stop throwing errors. You implemented CSRFs. Get ready for the all new LLM-based Cross-Site Scripting.

We don't really understand it, so we'll give it to the programmers.

Working...