funguy
Active member
> does Google currently penalize for it?
It depends how much manual work you do on top of the auto-generated work. If your manual effort is 0%, you are not going to get high rankings. It's a kind of indirect penalisation.
Yo guys, wonderful thread to share inputs.
I've tried Copy.ai + QuillBot + Grammarly and it works awesome. Give the AI just a little bit of human touch/ideas and it doesn't get detected as fake. Try it.
That's a nice combination of tools. Do you use QuillBot to rewrite content from Copy.ai?

Yeah, QuillBot to rewrite the content from Copy.ai; just a free account will do, like the Standard one.
Grammarly is a must IMO. I use that for everything, lol
> Yeah, QuillBot to rewrite the content from Copy.ai; just a free account will do, like the Standard one.
Thanks for that added info. I haven't used QuillBot in a while; time to revisit that tool.
And yeah, Grammarly is a must, hehe. The last part is to also add some images to make it look legit.
> Bramework
> Outputs - 20
> Detection Rate - 41%
> 10 Failed Outputs used in "Content Improver" - 34%
I don't understand the methodology you are using. With 20 outputs and a 41% detection rate, almost 12 outputs were not detected, so there should be only 8 failed outputs. Where did you get the 2 extra failed outputs?

It's the average detection rate.
Just an FYI: if a GPT-2 detector can reasonably identify content as human or fake, then it is safe to say that Google can also easily detect some of the dead giveaways. BERT can actually identify incorrect grammar with reasonable accuracy. All new content submitted for indexing is scanned by Google using various algorithms - BERT, MUM, RankBrain, neural networks, etc. It doesn't really matter which algorithm is responsible for the final score. We only need to apply basic logic to deduce what is being scored as poor content.
Here are 4 dead-giveaway patterns always exhibited by raw AI:
Redundant statements - sometimes 3 or more sentences saying the exact same thing, just reworded a bit.
Redundant paragraphs.
Incorrect grammar or term usage.
Shallow and bland - provides no value, especially in contrast to the other 172 articles already indexed on the exact same topic.
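The redundancy giveaways above are easy to flag mechanically yourself before publishing. Here's a minimal sketch in Python, assuming a simple word-overlap (Jaccard) measure and an arbitrary 0.6 threshold - this is just an illustration, not how Google or any commercial detector actually scores content:

```python
import re
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Word-set overlap between two sentences (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def redundant_pairs(text: str, threshold: float = 0.6):
    """Return pairs of sentences that say nearly the same thing.

    Splits on sentence-ending punctuation and compares lowercase
    word sets; the 0.6 threshold is an arbitrary illustrative choice.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    word_sets = [set(s.lower().split()) for s in sentences]
    return [
        (sentences[i], sentences[j])
        for i, j in combinations(range(len(sentences)), 2)
        if jaccard(word_sets[i], word_sets[j]) >= threshold
    ]

sample = ("AI content can hurt rankings. "
          "The weather was nice today. "
          "AI content can really hurt your rankings.")
print(redundant_pairs(sample))  # flags the first and third sentences
```

Run it over each draft and rewrite anything that gets flagged; the same idea extends to paragraphs by comparing paragraph word sets instead of sentences.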
> It's the average detection rate.
Also to the OP, do you have a link to your procedure? I am just curious how you're doing this process... it would be nice to see the work.
So in this case, you got 10 failed outputs, meaning those 10 scored like 10% or less, so the other 10 had to be super great, like 95-100% real, to average 41%, right?
Or am I not getting something?
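For what it's worth, that guessed split doesn't quite add up: ten outputs at 10% or less plus ten at 95-100% would average above 50%, not 41%. Many other mixes do work, though - a quick sanity check with purely invented score distributions:

```python
def average_detection_rate(scores):
    """Mean detection score across a batch of outputs."""
    return sum(scores) / len(scores)

# Hypothetical mix: 10 failed outputs scoring ~15% and
# 10 passing outputs scoring ~67% average out to exactly 41%.
failed = [15.0] * 10
passed = [67.0] * 10
print(average_detection_rate(failed + passed))  # 41.0

# The split guessed above (10% vs 95%) averages to 52.5%, not 41%.
print(average_detection_rate([10.0] * 10 + [95.0] * 10))  # 52.5
```

So a 41% average alone doesn't tell you how good the passing outputs were - only the full score list would.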
Sounds like a good workflow.

I don't usually use AI as a sole tool, but rather as a supplement to aid the process.
I already have blog and article templates.
In addition, I conduct SEO research, create a topic cluster around one topic, and create a list of blogs to write.
Then I do my own research and collect content and images,
pulling from sources beyond web articles, such as YouTube, podcasts, PDFs, and social media (that's easy for me now because I am used to it).
Next, I use an AI tool to get content for the blog, which I can add to the existing resources.
Then I compile the AI output and the templates together and write 10 articles in one sitting. Because of this process, it is quick.
Finally, I check plagiarism, proofreading, and grammar, then forward everything to the designer.
(Note: I have clients who want quality blogs rather than quantity, so this is my basic, laborious process.)
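That final plagiarism pass can be partly automated before reaching for a paid tool. A toy n-gram overlap check, purely illustrative (real plagiarism checkers compare against a whole web index, not a single known source):

```python
def ngrams(text: str, n: int = 5) -> set:
    """Set of n-word shingles from a text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(draft: str, source: str, n: int = 5) -> float:
    """Fraction of the draft's n-grams that also appear in the source.

    Values near 1.0 suggest heavy copying; near 0.0 suggests the
    draft is mostly original relative to this one source.
    """
    d = ngrams(draft, n)
    return len(d & ngrams(source, n)) / len(d) if d else 0.0

draft = "the quick brown fox jumps over the lazy dog"
source = "the quick brown fox jumps high over fences"
print(overlap_ratio(draft, source))  # 0.2
```

A threshold for "too much overlap" is a judgment call per client; the n=5 shingle size here is just a common illustrative default.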
QuillBot seems like a good tool for a big part of that.
The question right now, IMO, is which tool has the best cost/performance to help in the process of (good) writing.
I would love to have Writesonic, but it's too expensive. (If someone wants to share the cost, it has 5 seats, and I'm in.)
Rytr has a good price, but is it any good?
Too many products and too little distinction between many of them.