Folks,
Which of these is better for a crawler to use to fetch a page?
file_get_contents()
file_get_html()
open_file()
read_file()
I hear the first one has some bad limitations, hence people use PHP cURL instead.
I don't like cURL. It takes too many lines of code to achieve a single purpose.
Is file_get_html() safe and sound, with none of the limitations that file_get_contents() has?
Any input you would like to provide?
Say, which PHP function would you yourself use if you were building a web crawler? You'd want it to successfully fetch pages without running into limitations, chokes, hesitations, struggles, falters, dead ends, etc.
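For context, here is a minimal sketch of the kind of fetch I mean, using file_get_contents() with a stream context to set a timeout and a User-Agent. It assumes allow_url_fopen is enabled in php.ini; the function name fetchPage() is just an illustrative choice, not a built-in.

```php
<?php
// Minimal page fetch via file_get_contents() with a stream context.
// Assumes allow_url_fopen is enabled; fetchPage() is a hypothetical
// helper name, not part of PHP itself.
function fetchPage(string $url): ?string
{
    $context = stream_context_create([
        'http' => [
            'method'        => 'GET',
            'timeout'       => 10,              // seconds before giving up
            'user_agent'    => 'MyCrawler/1.0', // some hosts block the default UA
            'ignore_errors' => true,            // return the body even on 4xx/5xx
        ],
    ]);

    // file_get_contents() returns false on failure, so map that to null.
    $body = @file_get_contents($url, false, $context);
    return $body === false ? null : $body;
}
```

Is something like this enough for a crawler, or do the limitations people mention still bite here?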