Posts tagged with python

Google has docs for the Ad Manager API here. Unfortunately, their example:

# Imports assumed by the example.
from datetime import date, timedelta
import tempfile

from googleads import errors

# `client` is an initialized AdManagerClient (authorization is set up elsewhere).

# Set the start and end dates of the report to run (past 8 days).
end_date = date.today()
start_date = end_date - timedelta(days=8)

# Create report job.
report_job = {
    'reportQuery': {
        'dimensions': ['LINE_ITEM_ID', 'LINE_ITEM_NAME'],
        'columns': ['AD_SERVER_IMPRESSIONS', 'AD_SERVER_CLICKS',
                    'AD_SERVER_CTR', 'AD_SERVER_CPM_AND_CPC_REVENUE',
                    'AD_SERVER_WITHOUT_CPD_AVERAGE_ECPM'],
        'dateRangeType': 'CUSTOM_DATE',
        'startDate': start_date,
        'endDate': end_date
    }
}

# Initialize a DataDownloader.
report_downloader = client.GetDataDownloader(version='v202008')

try:
    # Run the report and wait for it to finish.
    report_job_id = report_downloader.WaitForReport(report_job)
except errors.AdManagerReportError as e:
    print('Failed to generate report. Error was: %s' % e)

with tempfile.NamedTemporaryFile(
        suffix='.csv.gz', mode='wb', delete=False) as report_file:
    # Download report data.
    report_downloader.DownloadReportToFile(
        report_job_id, 'CSV_DUMP', report_file)

yields a KeyError: 'date' on the report_job_id line. My authorization is correct, and I can make other calls with my client. My question is: how does report_job need to be updated for the example to work? I tried changing 'dateRangeType', but the error states it must be 'CUSTOM_DATE'.
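For reference, one workaround I've seen suggested (an assumption on my part, not something the linked docs confirm) is to pass startDate and endDate as plain year/month/day dicts rather than datetime.date objects, since the Ad Manager Date type is a struct with exactly those three fields:

# Untested sketch: represent the report dates as dicts matching the
# Ad Manager Date type (year/month/day) instead of datetime.date objects.
from datetime import date, timedelta

end_date = date.today()
start_date = end_date - timedelta(days=8)

def to_ad_manager_date(d):
    # Hypothetical helper converting a datetime.date to the dict shape
    # the API is assumed to expect.
    return {'year': d.year, 'month': d.month, 'day': d.day}

report_job = {
    'reportQuery': {
        'dimensions': ['LINE_ITEM_ID', 'LINE_ITEM_NAME'],
        'columns': ['AD_SERVER_IMPRESSIONS', 'AD_SERVER_CLICKS'],
        'dateRangeType': 'CUSTOM_DATE',
        'startDate': to_ad_manager_date(start_date),
        'endDate': to_ad_manager_date(end_date)
    }
}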

When you set up a campaign in Google AdWords, you can add negative keywords to it so that a search query will not match your campaign if it contains one of the negative keywords.

I want to extract the list of negative keywords for each campaign. In the documentation I was able to find this example:

def retrieve_negative_keywords(report_utils)
  report_definition = {
    :selector => {
      :fields => ['CampaignId', 'Id', 'KeywordMatchType', 'KeywordText']
    },
    :report_name => 'Negative campaign keywords',
    :report_type => 'CAMPAIGN_NEGATIVE_KEYWORDS_PERFORMANCE_REPORT',
    :download_format => 'CSV',
    :date_range_type => 'TODAY',
    :include_zero_impressions => true
  }
  campaigns = {}
  report = report_utils.download_report(report_definition)
  # Slice off the first row (report name).
  report.slice!(0..report.index("\n"))
  CSV.parse(report, { :headers => true }) do |row|
    campaign_id = row['Campaign ID']
    # Ignore totals row.
    if row[0] != 'Total'
      campaigns[campaign_id] ||= Campaign.new(campaign_id)
      negative = Negative.from_csv_row(row)
      campaigns[campaign_id].negatives << negative
    end
  end
  return campaigns
end

It is written in Ruby, and there are no Python examples for this task. There is also a report for negative keywords, but it holds no metrics, and I can't use it to retrieve the list of negative keywords for each campaign.

I am using this structure to query the database:

report_query = (adwords.ReportQueryBuilder()
                .Select('CampaignId', 'Id', 'KeywordMatchType', 'KeywordText')
                .From('CAMPAIGN_NEGATIVE_KEYWORDS_PERFORMANCE_REPORT')
                .During('LAST_7_DAYS')
                .Build())

But querying it gives an error:

googleads.errors.AdWordsReportBadRequestError: Type: QueryError.DURING_CLAUSE_REQUIRES_DATE_COLUMN

When I add Date to the Select clause, it throws the same error.

Has anyone been able to extract the negative keyword list per campaign using Python with the Google AdWords API reports?
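For what it's worth, here is an untested sketch of one approach: since CAMPAIGN_NEGATIVE_KEYWORDS_PERFORMANCE_REPORT is a structure report with no metrics, my assumption is that it supports no date range at all, so the DURING clause can simply be dropped. The grouping mirrors the Ruby example above, with a plain dict in place of the Campaign/Negative classes.

import csv
import io
from collections import defaultdict

from googleads import adwords

adwords_client = adwords.AdWordsClient.LoadFromStorage()
report_downloader = adwords_client.GetReportDownloader(version='v201809')

# Build the query without .During(...), on the assumption that this
# structure report does not accept a date range.
report_query = (adwords.ReportQueryBuilder()
                .Select('CampaignId', 'Id', 'KeywordMatchType', 'KeywordText')
                .From('CAMPAIGN_NEGATIVE_KEYWORDS_PERFORMANCE_REPORT')
                .Build())

output = io.StringIO()
report_downloader.DownloadReportWithAwql(
    report_query, 'CSV', output,
    skip_report_header=True, skip_column_header=False,
    skip_report_summary=True)
output.seek(0)

# Group negative keywords by campaign, as in the Ruby example.
# Display names ('Campaign ID', 'Keyword') are assumptions; check the
# header row of the actual CSV output.
negatives_by_campaign = defaultdict(list)
for row in csv.DictReader(output):
    negatives_by_campaign[row['Campaign ID']].append(row['Keyword'])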

I've been trying to get the results from the bid simulator in Google Ads via the API, but have not been successful. I have tried to follow the steps outlined by Google in these guides:

  • https://support.google.com/google-ads/answer/9634060?hl=en
  • https://developers.google.com/adwords/api/docs/guides/bid-landscapes#python_3

I have very slightly modified the code, and it does run:

from googleads import adwords

CAMPAIGN_ID = '---------'
PAGE_SIZE = 100


def main(client, campaign_id):
    # Initialize appropriate service.
    data_service = client.GetService('DataService', version='v201809')

    # Get all the campaigns for this account.
    selector = {
        'fields': ['CampaignId', 'CriterionId', 'StartDate', 'EndDate',
                   'BidModifier', 'LocalClicks', 'LocalCost', 'LocalImpressions',
                   'TotalLocalClicks', 'TotalLocalCost', 'TotalLocalImpressions',
                   'RequiredBudget'],
        'paging': {
            'startIndex': 0,
            'numberResults': PAGE_SIZE
        },
        'predicates': [{
            'field': 'CampaignId', 'operator': 'IN', 'values': [campaign_id]
        }]
    }

    # Set initial values.
    offset = 0
    more_pages = True

    while more_pages:
        num_landscape_points = 0
        page = data_service.getCampaignCriterionBidLandscape(selector)

        # Display results.
        if 'entries' in page:
            for bid_modifier_landscape in page['entries']:
                print(f"Found campaign-level criterion bid modifier landscapes for"
                      f" criterion with ID {bid_modifier_landscape['criterionId']},"
                      f" start date {bid_modifier_landscape['startDate']},"
                      f" end date {bid_modifier_landscape['endDate']},"
                      f" and landscape points:")
                for landscape_point in bid_modifier_landscape['landscapePoints']:
                    num_landscape_points += 1
                    print(f"\tbid modifier: {landscape_point['bidModifier']},"
                          f" clicks: {landscape_point['clicks']},"
                          f" cost: {landscape_point['cost']['microAmount']},"
                          f" impressions: {landscape_point['impressions']},"
                          f" total clicks: {landscape_point['totalLocalClicks']},"
                          f" total cost: {landscape_point['totalLocalCost']['microAmount']},"
                          f" total impressions: {landscape_point['totalLocalImpressions']},"
                          f" and required budget: {landscape_point['requiredBudget']['microAmount']}")
        else:
            print('No bid modifier landscapes found.')

        # Increment by the total number of landscape points within the page,
        # NOT the number of entries (bid landscapes) in the page.
        offset += num_landscape_points
        selector['paging']['startIndex'] = str(offset)
        more_pages = num_landscape_points >= PAGE_SIZE


if __name__ == '__main__':
    # Initialize client object.
    adwords_client = adwords.AdWordsClient.LoadFromStorage()
    main(adwords_client, CAMPAIGN_ID)

This does not, however, let me get the predicted conversion value, only the clicks, impressions, etc., which is not really what I am looking for. That seems to line up with the documentation, but in the GUI I can see conversion value, and no matter what key I query, the API won't give me the same simulator output as the GUI.

Any thoughts?
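One observation, in case it helps: the v201809 DataService landscapes don't appear to expose conversion value at all. The newer Google Ads API has a campaign_simulation resource whose simulation points include biddable conversions and conversion value, so something along these lines might get closer to the GUI output (an untested sketch; the customer ID is a placeholder and the field names are assumptions worth verifying):

# Untested sketch against the newer google-ads client; field names follow
# the campaign_simulation resource and should be verified.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      campaign_simulation.campaign_id,
      campaign_simulation.type,
      campaign_simulation.target_cpa_point_list.points
    FROM campaign_simulation
    WHERE campaign_simulation.type = 'TARGET_CPA'"""

# '1234567890' is a placeholder customer ID.
for row in ga_service.search(customer_id="1234567890", query=query):
    for point in row.campaign_simulation.target_cpa_point_list.points:
        # Each point carries predicted conversions and conversion value.
        print(point.target_cpa_micros,
              point.biddable_conversions,
              point.biddable_conversions_value)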

I want to download an app campaign report using Python. The code below works for App campaign reports but not for App engagement campaigns. Please help.

report_downloader = adwords_client.GetReportDownloader(version='v201809')

# Create report query.
report_query = (adwords.ReportQueryBuilder()
                .Select('CampaignId', 'CampaignName', 'CampaignStatus',
                        'CustomerDescriptiveName', 'AccountDescriptiveName',
                        'Date', 'DayOfWeek', 'Cost', 'Impressions', 'Clicks',
                        'Interactions', 'Engagements',
                        'TopImpressionPercentage',
                        'AbsoluteTopImpressionPercentage', 'Conversions')
                .From('CAMPAIGN_PERFORMANCE_REPORT')
                .During('YESTERDAY')
                .Build())

# You can provide a file object to write the output to; here the report
# is written to `output`, a file-like object defined elsewhere
# (e.g. io.StringIO()).
report_downloader.DownloadReportWithAwql(
    report_query, 'CSV', output, skip_report_header=True,
    skip_column_header=True, skip_report_summary=True,
    include_zero_impressions=True)
output.seek(0)
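One thing worth checking (an assumption on my part): in v201809 the CAMPAIGN_PERFORMANCE_REPORT exposes AdvertisingChannelType and AdvertisingChannelSubType, and App engagement campaigns should report under the MULTI_CHANNEL channel with an APP_CAMPAIGN_FOR_ENGAGEMENT sub type. Selecting and filtering on those fields would at least confirm whether the engagement campaigns appear in the report at all:

# Untested sketch: filter the report to App engagement campaigns only.
# The enum value APP_CAMPAIGN_FOR_ENGAGEMENT is an assumption to verify.
report_query = (adwords.ReportQueryBuilder()
                .Select('CampaignId', 'CampaignName',
                        'AdvertisingChannelType', 'AdvertisingChannelSubType',
                        'Cost', 'Impressions', 'Clicks', 'Engagements')
                .From('CAMPAIGN_PERFORMANCE_REPORT')
                .Where('AdvertisingChannelSubType')
                .EqualTo('APP_CAMPAIGN_FOR_ENGAGEMENT')
                .During('YESTERDAY')
                .Build())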

How do I use Google protocol buffers in a multiprocess script?

My use case is:

  • pulling data from the new Google Ads API
  • appending the objects with metadata
  • modelling using the objects
  • pushing the results to a database

AdWords Campaign Wrapper Object

I have an existing process for the old AdWords API, where I pull the data and store it in custom classes, e.g.

class Campaign(Represantable):
    def __init__(self, id, managed_customer_id, base_campaign_id, name,
                 status, serving_status):
        self.id = id
        self.managed_customer_id = managed_customer_id
        self.base_campaign_id = base_campaign_id
        self.name = name
        self.status = status
        self.serving_status = serving_status

    @classmethod
    def from_zeep(cls, campaign, managed_customer_id):
        return cls(
            campaign.id,
            managed_customer_id,
            campaign.baseCampaignId,
            campaign.name,
            campaign.status,
            campaign.servingStatus
        )

Multiprocessing script

If I want to pull campaigns from a dozen accounts, I can run the scripts that populate the Campaign objects in parallel using pathos (again, the code is simplified for this example):

import multiprocessing as mp
from pathos.pools import ProcessPool


class WithParallelism(object):
    def __init__(self, parallelism_level):
        self.parallelism_level = parallelism_level

    def _parallel_apply(self, fn, collection, **kwargs):
        pool = ProcessPool(nodes=self.parallelism_level)

        # This prevents Python from printing large traces when the user
        # interrupts execution (e.g. Ctrl+C).
        def keyboard_interrupt_wrapper_fn(*args_wrapped):
            try:
                return fn(*args_wrapped, **kwargs)
            except KeyboardInterrupt:
                pass
            except Exception as err:
                return err

        errors = pool.map(keyboard_interrupt_wrapper_fn, collection)
        return errors

Google Ads Campaign Wrapper Object

With the new API, I planned to store the protobuf object within my class and use pointers into it to access the object's attributes. My class is a lot more complex than the example, using descriptors and subclass init for the attributes, but for simplicity it's effectively something like this:

class Campaign(Proto):
    def __init__(self, **kwargs):
        if "proto" in kwargs:
            self._proto = kwargs['proto']
        if "parent" in kwargs:
            self._parent = kwargs['parent']
        self._init_metadata(**kwargs)

    @property
    def id(self):
        return self._proto.id.value

    @property
    def name(self):
        return self._proto.name.value

    ...

This has the added advantage of being able to traverse the parent Google Ads object, to extract data from that protobuf object.

However, when I run my script to populate these new objects in parallel, I get a pickle error. I understand that multiprocess uses pickle to serialize objects, and that one of the key advantages of protobuf objects is that they can be easily serialized.

How should I go about pulling the new Google Ads data in parallel?

  • Should I serialize and deserialize the data in the Campaign object using SerializeToString (see the sketch after this list)?
  • Should I just extract and store the scalar data (id, name), as I did with AdWords?
  • Is there an entirely different approach?
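Here is a minimal sketch of the first option, assuming a hypothetical campaign_pb2 module in place of the real Google Ads message classes: implementing __getstate__/__setstate__ lets pickle ship the wire-format bytes instead of the live message.

import campaign_pb2  # hypothetical generated module standing in for the real one


class Campaign:
    def __init__(self, proto):
        self._proto = proto

    @property
    def id(self):
        return self._proto.id.value

    def __getstate__(self):
        # Swap the live message for its serialized bytes so that pickle
        # (used under the hood by multiprocessing/pathos) can handle it.
        state = self.__dict__.copy()
        state['_proto'] = self._proto.SerializeToString()
        return state

    def __setstate__(self, state):
        # Rebuild the message from the bytes on the other side of the fork.
        proto = campaign_pb2.Campaign()
        proto.ParseFromString(state['_proto'])
        state['_proto'] = proto
        self.__dict__.update(state)

The second option (extracting scalars up front) avoids the serialization round-trip entirely, at the cost of losing the ability to traverse back to the parent object.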