
Which PHP Function Is Better to Fetch or Crawl Pages?

Folks,

Which of these is best for a crawler to fetch a page?
file_get_contents()
file_get_html()
fopen() / fread()
readfile()
(Note: there are no PHP functions named open_file() or read_file(); the built-ins are fopen()/fread() and readfile().)

I hear the first one has some bad limitations, which is why people use PHP cURL instead.
I don't like cURL; it takes too many lines of code to achieve a single purpose.
Is file_get_html() safe and sound, free of the limitations of file_get_contents()? (Keep in mind that file_get_html() is not a core PHP function; it comes from the Simple HTML DOM Parser library and calls file_get_contents() internally, so it inherits the same limitations.)
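For reference, many of the complaints about file_get_contents() (no User-Agent header, no timeout, needs allow_url_fopen=On) can be worked around with a stream context. A minimal sketch, with a placeholder URL and crawler name:

```php
<?php
// Hypothetical sketch: fetching a page with file_get_contents() plus a
// stream context. The context supplies a User-Agent header, a timeout,
// and redirect-following, which file_get_contents() lacks by default.
// Requires allow_url_fopen=On in php.ini for URL fetches.

$context = stream_context_create([
    'http' => [
        'method'          => 'GET',
        'header'          => "User-Agent: MyCrawler/1.0\r\n", // placeholder name
        'timeout'         => 10,  // seconds before giving up
        'follow_location' => 1,   // follow HTTP redirects
    ],
]);

$html = @file_get_contents('https://example.com/', false, $context);

if ($html === false) {
    // file_get_contents() returns false on failure (DNS error, timeout, etc.)
    echo "Fetch failed\n";
} else {
    echo strlen($html) . " bytes fetched\n";
}
```

This keeps the one-liner feel while closing the most common gaps; it still cannot match cURL's fine-grained control over redirects, proxies, and HTTPS options.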

Any input you would like to provide?
Say, which PHP function would you yourself use if you were building a web crawler? You want it to fetch pages reliably, without running into limitations, chokes, or dead ends.
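For comparison, here is roughly what the cURL version of the same fetch looks like. It is more verbose, but it does not depend on allow_url_fopen and gives explicit control over timeouts and redirects (URL and crawler name are placeholders):

```php
<?php
// Hypothetical sketch: fetching one page with the cURL extension.
// More lines than file_get_contents(), but every behavior is explicit.

$ch = curl_init('https://example.com/');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,            // return body instead of printing it
    CURLOPT_FOLLOWLOCATION => true,            // follow HTTP redirects
    CURLOPT_MAXREDIRS      => 5,               // but not forever
    CURLOPT_TIMEOUT        => 10,              // seconds before giving up
    CURLOPT_USERAGENT      => 'MyCrawler/1.0', // placeholder name
]);

$html = curl_exec($ch);

if ($html === false) {
    // curl_error() describes what went wrong (DNS, timeout, TLS, ...)
    echo 'cURL error: ' . curl_error($ch) . "\n";
} else {
    echo strlen($html) . " bytes fetched\n";
}
curl_close($ch);
```

The extra lines are the price of that control; wrapping this in a small fetch() helper function reduces a crawler's per-call boilerplate to one line.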
