
Since the introduction of Perplexity's Deep Research on Feb 14, 2025, the third major deep research AI offering after OpenAI ChatGPT's and Google Gemini's, I have been using it rather extensively. It has been a great help on a research project I am working on.

Early findings on Perplexity Deep Research: it often takes 5-20 minutes, not quite the 2-4 minutes advertised. Perhaps this is due to the spike in usage since launch, or perhaps it is a tweak to make it appear to be working as hard as the other two Deep Research offerings. Overall, I do not have an issue with the response time, and I can see its thought process throughout the thinking time.

I am a subscriber to ChatGPT Plus ($20/month) and Perplexity Pro ($20/month). As such, I do not yet have access to ChatGPT Deep Research, which, as of today, is only available to ChatGPT Pro ($200/month) subscribers.

My preliminary findings on Perplexity Deep Research, after about 24 hours of rather extensive use:

  1. As mentioned, it takes 5-20 minutes, rarely the advertised 2-4 minutes.

  2. It likes Reddit a lot. The cited sources tend to include Reddit heavily if social sources are allowed. On the web version, I often explicitly exclude social sources, but the mobile and desktop apps seem to lack such an option – I prefer including both web and academic sources but not social sources.

  3. Its responses are rather brief (~1,000 words) compared to what I have seen from ChatGPT's Deep Research, and they often leave me wanting to learn more through follow-up questions. I am on the fence about whether this is better: a brief response can be consumed within 2-3 minutes, whereas I have seen some ChatGPT Deep Research users feed its very lengthy responses into Google's NotebookLM just to help digest them.

  4. Perplexity goes into deep research mode immediately on the first query, without first asking clarifying questions as ChatGPT Deep Research does. I do not like this as much, for the following 2 reasons:

    1. It forces you to be highly verbose in prompting, because you do not get a chance to clarify after the LLM's first take. This demands a lot more effort and guesswork, and if you miss something, you only get to correct yourself 15 minutes later.
    2. It may misinterpret your intention and give you output that is off the mark. In one of my cases, I asked it to research the early history of the web, and it instead covered the ARPANET and the Internet broadly.
  5. It has no usage cap, which is good!

  6. UX issue: I like to see its chain of thought, but if you navigate away – often because you want to start a new thread while Perplexity works – you are unable to get back to an in-progress thread before the response is generated, which then hides the thought process.

  7. Bug: This is likely a teething issue, but I have been losing threads or finding the same queries being generated multiple times.

Overall, I am a happy user, and I am looking forward to ChatGPT's Deep Research being made available to Plus users. I now split my usage as follows:

  1. ChatGPT 4o/o1 for usual queries, quick research and planning tasks.
  2. ChatGPT o1/o3 or Claude for coding assistance.
  3. Perplexity Deep Research for... deep research with citations.
