Is there a maximum amount of header redirects allowed?
I have a process set up which basically runs through a bunch of import pages, and once a page is finished it moves on to the next. I've been redirecting with the header as such...
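For reference, each page ends with the standard header() redirect, something like this (page names here are placeholders, not my real filenames):

```php
<?php
// Sketch of the bottom of one import page: once this page's import
// work is done, bounce the browser to the next page in the chain.
function redirectTo($page)
{
    // The Location header must go out before any other output.
    header('Location: ' . $page);
    // Stop the script so nothing below the redirect keeps running.
    exit;
}

// e.g. at the end of the first import page:
// redirectTo('import_step2.php');
```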
But it keeps hanging up. At first I thought the page it was dying on was at fault, but after about an hour of going over and over it I decided to just start the import 2 pages in, and sure enough... it hung 2 pages further. After a bit of testing around I realized no matter where I start it, I get exactly 20 header redirects before it won't do any more. This is the error message it gives me...
Apparently it thinks it is stuck in a never-ending loop, so it just stops after 20? But I need about 60 page redirects or so to run this thing. Is this something that can be changed in PHP configuration, or is it a Firefox issue? Either way, do I need to change my whole approach, or is there an easy way to solve it? All of the pages are already built and need to be able to be run individually as well; it would be a very large hassle to have to start combining them or something.
Firefox has detected that the server is redirecting the request for this address in a way that will never complete.
The browser has stopped trying to retrieve the requested item. The site is redirecting the request in a way that will never complete.
* Have you disabled or blocked cookies required by this site?
* NOTE: If accepting the site's cookies does not resolve the problem, it is likely a server configuration issue and not your computer.
PS. There are no cookies in this code.
Last edited by xvszero; 12-04-2008 at 12:56 PM.
I won't admit to understanding exactly why you want/need all these redirects, but it seems to me you might be better off with a succession of include()'s and/or require()'s instead of redirects, perhaps?
Well I don't fully understand either, I'm still kind of an amateur. But essentially we have about 50 or so imports that all have their own PHP pages, and we need an automated process that runs one after another every night. I have all that working except, as stated, it dies after 20 or so redirects.
I suppose includes would work just the same though? Albeit it's a couple-hour-long process, and then the original page would need to be running the whole time... is that a problem? I've had some issues in the past with very long pages just dying randomly, despite putting in very long max execution times. Then again I always suspected that was server hiccups, which would kill the process regardless.
The includes seem to work but now the importtracking function doesn't pass the individual page names. It just passes the main page that has included it.
Is there any way to get it to pass the individual page names without having to type each one in individually?
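One possibility, hedged because I'm guessing at how importtracking() is called: inside an included file, __FILE__ still refers to that file rather than the including script, so each import page can report its own name without anyone typing it in:

```php
<?php
// __FILE__ always names the file this line physically sits in,
// even when that file is pulled in via include().
$pageName = basename(__FILE__);

// Then, at the top of each included import page:
// importtracking($pageName);   // importtracking() is your existing function
```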
But still I really don't know what's going on. My gut instinct is that whatever all these separate pages are doing should be in one function (or class, perhaps), then that function would get called in a loop through an array or database result set that would provide the parameters needed for each function call.
I think you're doing something like this:
index.php - go to index2.php
index2.php - go to index3.php
index3.php - go to index.php
what are you doing to need 60 redirects!?
In Firefox's about:config there is network.http.redirection-limit (integer, default 20).
Changing that should do the trick. I doubt your users would change it though.
Last edited by Tabo; 12-05-2008 at 07:46 PM.
You might want to consider some kind of task server to complete this reliably.
A database could store the path to the scripts to run and whether it has been completed.
Table Columns for taskserver
taskid path completed
SELECT * FROM taskserver WHERE completed=0
create an array of uncompleted taskids
loop the array, include the file and update the table
//script runs then update taskserver
UPDATE taskserver SET completed=1 WHERE taskid = $taskid
You can build an array with parts of the query in the loop dynamically, to only update one time rather than on each iteration. Google multiple update mysql.
Have the taskserver script run at a scheduled time with crontab and on boot in case the server went down in the middle of a task.
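Pulled together, the runner might look something like this. It's a sketch using PDO (so it can be pointed at MySQL or anything else), with the taskserver table and columns from above; the callback for "run the script" is a placeholder so the include step can be swapped in:

```php
<?php
// Sketch of the task-server runner.  $runner is a callback so the
// "run the script" step is pluggable; in real use it would simply
// be: function ($path) { include $path; }
function runPendingTasks(PDO $db, $runner)
{
    $done = 0;
    $stmt = $db->query('SELECT taskid, path FROM taskserver WHERE completed = 0');
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $task) {
        $runner($task['path']);   // run the import script itself

        // Mark it finished so a restart after a crash skips it.
        $upd = $db->prepare('UPDATE taskserver SET completed = 1 WHERE taskid = ?');
        $upd->execute(array($task['taskid']));
        $done++;
    }
    return $done;
}
```

A crontab line such as `0 2 * * * php /path/to/taskserver.php` (path is a placeholder) would fire it nightly, and because finished rows are flagged, re-running it after a crash only picks up the tasks still marked completed = 0.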
The number of ways to resolve this issue is undoubtedly unfathomably enormous ... but, I too am really confused as to what your end goal is ...
In any case, for weird things, use AJAX. Or pseudo-AJAX:
var temp_node = document.createElement('script');
temp_node.src = '/path/to/step2.php';
document.body.appendChild(temp_node); // appending the node is what actually fires the request
Will something like that suit your needs?
I understand that logic, but really each import is so specialized that it'd be a very, very... VERY long function. Tons of what the company manager calls "massaging" the data going on in specific ways for each import. It's easier to just keep them separated.
Originally Posted by NogDog
Sounds like an excellent candidate for object-oriented coding. You might have one (or a few) abstract classes with any common functionality defined in its methods, then abstract methods for the functionality that needs to be separately tailored for each variation. Then each of your separate files would become class definitions which extend that abstract class and which would be instantiated within the master script. Possibly a factory pattern would be useful here.
^^^ My thoughts exactly; you are discovering first-hand why procedural coding is not as well suited to large, complex problems as a well-thought-out OO approach.
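A minimal sketch of the shape NogDog describes; every class and method name here is made up for illustration:

```php
<?php
// Shared plumbing lives in the abstract base class; each import
// only has to supply its own "massaging" step.
abstract class ImportTask
{
    // The common part: run the import and report what happened.
    public function run()
    {
        $rows = $this->massage();   // import-specific data prep
        return 'imported ' . count($rows) . ' rows for ' . $this->name();
    }

    // Each concrete import fills these in.
    abstract protected function massage();
    abstract protected function name();
}

class PhoneMarketingImport extends ImportTask
{
    protected function massage()
    {
        // The real version would apply the do-not-call, bankruptcy,
        // and dropped-dealer exclusions before returning rows.
        return array('row1', 'row2');
    }

    protected function name()
    {
        return 'phone_marketing';
    }
}

// The master script just loops over a list of task objects:
foreach (array(new PhoneMarketingImport()) as $task) {
    echo $task->run(), "\n";
}
```

The factory pattern NogDog mentions would just be a function that maps an import's name to the right subclass and returns an instance of it.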
I kind of vaguely remember OOP from school.
I think though, when I say a bunch of imports, I am somewhat giving off the wrong impression. Though they are imports, there is so much specific work going on besides the actual import that, given there IS a limited number of them (I think it's actually closer to 25-30 than the 60 I originally thought), it is probably still easier to handle them one by one than to set up objects.
For instance, on the phone marketing import I first have to find all the customers who will be excluded from being imported into the table. So you have to grab from one table to see if a customer has a "do not call" or similar request connected to them. Then you need to exclude all customers in the bankruptcies database, because we're not allowed to contact customers in bankruptcy proceedings. Then you exclude those connected to dealers we no longer support... that's another table. Then there are a few more exclusions from a completely different database... well, 4 different databases. I can't really get those into our current database without redoing a lot of design done before I got here, and this project is supposedly due by Christmas, so that's not happening.
So now we can finally import from the customer + account + a few other tables with fields we need... into the phone marketing table for each of our two companies.
And then we use those to create what are called "non" phone marketing tables (I have no idea why, the name is very deceiving) which are, essentially, taking any customer who is a customer of one of the companies but NOT of the other and giving that data to the other so they can market to that customer.
This whole design is a bit different than say... importing into the delinquent account tables. Which you would *think* would be as simple as calculating who is delinquent and who isn't, but yet again there is a lot more specific stuff involved in determining what "delinquent" even means.
So while a lot of this could all be set up with classes and objects, most of it is very specific stuff that simply isn't done on the other imports. The only obvious similarity is that, eventually, fields get imported into a database. But that part is just a few lines of code, the "massaging" is the bigger part. Is it really worth it to go OOP when the imports, as it stands, are all already working on their own pages?
Here's an off-the-wall and off-the-top-of-my head idea: have your main script call each of the others via cURL. This way each script is still processed in a stand-alone mode as when you do the redirects, but the main script never redirects. You might, however, want to have it flush some output between each call so that the browser doesn't think it timed out.
function myCurl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $output = curl_exec($ch);
    curl_close($ch);
    return $output;
}

$scripts = array('import1.php', 'import2.php' /* placeholder names */);
foreach ($scripts as $script) {
    $result = myCurl('http://localhost/directory/' . $script);
    echo "<h3>Result for '$script':</h3>\n$result";
    flush(); // push some output out between calls so the browser doesn't give up
}
I honestly don't even know what cURL is, though it more and more seems like something I am going to need to figure out. Do I have to configure the server a specific way to use it? My boss sort of fired the server guy and I'm hesitant to mess with server configuration unless it is very simple.
It's normally enabled in most PHP installations. You could either check the output of phpinfo() to see if it's enabled, or just do a quick test:
echo (function_exists('curl_init')) ? 'Yes' : 'No';